I0429 22:11:56.717317 22 e2e.go:129] Starting e2e run "e1f0227b-afbd-44c9-96be-d75019a2ada3" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1651270315 - Will randomize all specs Will run 17 of 5773 specs Apr 29 22:11:56.802: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:11:56.807: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Apr 29 22:11:56.835: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 29 22:11:56.901: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting Apr 29 22:11:56.901: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting Apr 29 22:11:56.901: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 29 22:11:56.901: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Apr 29 22:11:56.901: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 29 22:11:56.911: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Apr 29 22:11:56.911: INFO: e2e test version: v1.21.9 Apr 29 22:11:56.912: INFO: kube-apiserver version: v1.21.1 Apr 29 22:11:56.912: INFO: >>> kubeConfig: /root/.kube/config Apr 29 22:11:56.916: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:11:56.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W0429 22:11:56.939728 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 29 22:11:56.940: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Apr 29 22:11:56.943: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 22:11:56.946: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 22:11:56.954: INFO: Waiting for terminating namespaces to be deleted... Apr 29 22:11:56.956: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 22:11:56.963: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:11:56.963: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:11:56.963: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:11:56.963: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 22:11:56.963: INFO: Container discover ready: false, restart count 0 Apr 29 22:11:56.963: INFO: Container init ready: false, restart count 0 Apr 29 22:11:56.963: INFO: Container install ready: false, restart count 0 Apr 29 22:11:56.963: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:11:56.963: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:11:56.963: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:11:56.963: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:11:56.963: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:11:56.963: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.963: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:11:56.964: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.964: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:11:56.964: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.964: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:11:56.964: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:11:56.964: INFO: Container collectd ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:11:56.964: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:11:56.964: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:11:56.964: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 22:11:56.964: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container grafana ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:11:56.964: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.964: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:11:56.964: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 22:11:56.973: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:11:56.973: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:11:56.973: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:11:56.973: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 22:11:56.973: INFO: Container discover ready: false, restart count 0 Apr 29 22:11:56.973: INFO: Container init ready: false, restart count 0 Apr 29 22:11:56.973: INFO: Container install ready: false, restart count 0 Apr 29 22:11:56.973: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:11:56.973: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:11:56.973: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:11:56.973: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:11:56.973: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:11:56.973: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:11:56.973: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:11:56.973: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:11:56.973: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:11:56.973: INFO: Container collectd ready: true, restart count 0 Apr 29 22:11:56.973: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:11:56.973: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:11:56.973: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:11:56.973: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:11:56.973: INFO: Container node-exporter ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f3653140-3060-42dc-9b5b-0c707c77c269 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-f3653140-3060-42dc-9b5b-0c707c77c269 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f3653140-3060-42dc-9b5b-0c707c77c269 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:17:05.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3018" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.166 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":1,"skipped":347,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:17:05.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:17:05.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-411" for this suite. STEP: Destroying namespace "nspatchtest-4ad29f6d-00d5-43f1-9aed-0772cafb2b63-6859" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":2,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:17:05.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:17:05.202: INFO: Create a RollingUpdate DaemonSet Apr 29 22:17:05.206: INFO: Check that daemon pods launch on every node of the cluster Apr 29 22:17:05.210: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:05.210: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:05.210: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:05.213: INFO: Number of nodes with available pods: 0 Apr 29 22:17:05.213: INFO: Node node1 is running more than one daemon pod Apr 29 22:17:06.218: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:06.218: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:06.218: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:06.221: INFO: Number of nodes with available pods: 0 Apr 29 22:17:06.221: INFO: Node node1 is running more than one daemon pod Apr 29 22:17:07.219: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:07.220: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:07.220: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:07.222: INFO: Number of nodes with available pods: 0 Apr 29 22:17:07.222: INFO: Node node1 is running more than one daemon pod Apr 29 22:17:08.217: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:08.217: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:08.217: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:08.220: INFO: Number of nodes with available pods: 2 Apr 29 22:17:08.220: INFO: Number of running nodes: 2, number of available pods: 2 Apr 29 22:17:08.220: INFO: Update the DaemonSet to trigger a rollout Apr 29 22:17:08.226: INFO: Updating DaemonSet daemon-set Apr 29 22:17:15.241: INFO: Roll back the DaemonSet before rollout is complete Apr 29 22:17:15.248: INFO: Updating DaemonSet daemon-set Apr 29 22:17:15.248: INFO: Make sure DaemonSet rollback is complete Apr 29 22:17:15.251: INFO: Wrong image for pod: daemon-set-sf65x. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Apr 29 22:17:15.251: INFO: Pod daemon-set-sf65x is not available Apr 29 22:17:15.256: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:15.256: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:15.256: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:16.269: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:16.269: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:16.269: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:17.266: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:17.266: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:17.266: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:18.265: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:18.265: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:18.265: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:19.268: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:19.268: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:19.268: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:20.264: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:20.265: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:20.265: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:21.265: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:21.265: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:21.265: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:22.266: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:22.266: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:22.267: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:23.266: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:23.266: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:23.267: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:24.268: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:24.268: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:24.268: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:25.260: INFO: Pod daemon-set-dvvc5 is not available Apr 29 22:17:25.265: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:25.265: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:17:25.265: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6788, will wait for the garbage collector to delete the pods Apr 29 22:17:25.330: INFO: Deleting DaemonSet.extensions daemon-set took: 3.854292ms Apr 29 22:17:25.430: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.291219ms Apr 29 22:17:35.234: INFO: Number of nodes with available pods: 0 Apr 29 22:17:35.234: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 22:17:35.241: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52070"},"items":null} Apr 29 22:17:35.244: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52070"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:17:35.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6788" for this suite. • [SLOW TEST:30.100 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":3,"skipped":1521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:17:35.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 22:17:35.306: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 22:17:35.314: INFO: Waiting for terminating namespaces to be deleted... Apr 29 22:17:35.316: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 22:17:35.332: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:17:35.332: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:17:35.332: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 22:17:35.332: INFO: Container discover ready: false, restart count 0 Apr 29 22:17:35.332: INFO: Container init ready: false, restart count 0 Apr 29 22:17:35.332: INFO: Container install ready: false, restart count 0 Apr 29 22:17:35.332: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:17:35.332: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:17:35.332: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:17:35.332: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:17:35.332: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:17:35.332: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:17:35.332: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:17:35.332: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:17:35.332: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:17:35.332: INFO: Container collectd ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:17:35.332: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:17:35.332: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:17:35.332: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 22:17:35.332: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container grafana ready: true, restart count 0 Apr 29 22:17:35.332: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:17:35.333: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.333: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:17:35.333: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 22:17:35.350: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:17:35.350: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:17:35.350: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:17:35.350: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 22:17:35.350: INFO: Container discover ready: false, restart count 0 Apr 29 22:17:35.350: INFO: Container init ready: false, restart count 0 Apr 29 22:17:35.350: INFO: Container install ready: false, restart count 0 Apr 29 22:17:35.350: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:17:35.350: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:17:35.350: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:17:35.350: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:17:35.350: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:17:35.350: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:17:35.350: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:17:35.350: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:17:35.350: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:17:35.350: INFO: Container collectd ready: true, restart count 0 Apr 29 22:17:35.350: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:17:35.350: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:17:35.350: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:17:35.350: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:17:35.350: INFO: Container node-exporter ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16ea7de989ec607a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:17:36.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8808" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":4,"skipped":2296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:17:36.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 29 22:17:36.709: INFO: Pod name wrapped-volume-race-9a0224df-9f8b-4ab6-8801-a862ea8732f4: Found 2 pods out of 5 Apr 29 22:17:41.716: INFO: Pod name wrapped-volume-race-9a0224df-9f8b-4ab6-8801-a862ea8732f4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9a0224df-9f8b-4ab6-8801-a862ea8732f4 in namespace emptydir-wrapper-7107, will wait for the garbage collector to delete the pods Apr 29 22:18:05.805: INFO: Deleting ReplicationController wrapped-volume-race-9a0224df-9f8b-4ab6-8801-a862ea8732f4 took: 5.818347ms Apr 29 22:18:05.906: INFO: Terminating ReplicationController wrapped-volume-race-9a0224df-9f8b-4ab6-8801-a862ea8732f4 pods took: 100.663919ms STEP: Creating RC which spawns configmap-volume pods Apr 29 22:18:15.222: INFO: Pod name wrapped-volume-race-594964e6-ea56-41d1-8cc8-7601447295fa: Found 0 pods out of 5 Apr 29 22:18:20.233: INFO: Pod name wrapped-volume-race-594964e6-ea56-41d1-8cc8-7601447295fa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-594964e6-ea56-41d1-8cc8-7601447295fa in namespace emptydir-wrapper-7107, will wait for the garbage collector to delete the pods Apr 29 22:18:34.323: INFO: Deleting ReplicationController wrapped-volume-race-594964e6-ea56-41d1-8cc8-7601447295fa took: 8.57042ms Apr 29 22:18:34.424: INFO: Terminating ReplicationController wrapped-volume-race-594964e6-ea56-41d1-8cc8-7601447295fa pods took: 101.039417ms STEP: Creating RC which spawns configmap-volume pods Apr 29 22:18:45.343: INFO: Pod name wrapped-volume-race-9e99d421-5197-4266-bae9-bf2114c1de39: Found 0 pods out of 5 Apr 29 22:18:50.352: INFO: Pod name wrapped-volume-race-9e99d421-5197-4266-bae9-bf2114c1de39: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9e99d421-5197-4266-bae9-bf2114c1de39 in namespace emptydir-wrapper-7107, will wait for the garbage collector to delete the pods Apr 29 22:19:04.431: INFO: Deleting ReplicationController wrapped-volume-race-9e99d421-5197-4266-bae9-bf2114c1de39 took: 3.944118ms Apr 29 22:19:04.531: INFO: Terminating ReplicationController wrapped-volume-race-9e99d421-5197-4266-bae9-bf2114c1de39 pods took: 100.374608ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:19:15.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7107" for this suite. • [SLOW TEST:99.131 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":5,"skipped":2544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:19:15.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 29 22:19:15.580: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:15.580: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:15.580: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:15.582: INFO: Number of nodes with available pods: 0 Apr 29 22:19:15.582: INFO: Node node1 is running more than one daemon pod Apr 29 22:19:16.588: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:16.588: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:16.588: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:16.592: INFO: Number of nodes with available pods: 0 Apr 29 22:19:16.592: INFO: Node node1 is running more than one daemon pod Apr 29 22:19:17.588: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:17.588: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:17.588: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:17.591: INFO: Number of nodes with available pods: 0 Apr 29 22:19:17.591: INFO: Node node1 is running more than one daemon pod Apr 29 22:19:18.589: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:18.590: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:18.590: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:18.594: INFO: Number of nodes with available pods: 1 Apr 29 22:19:18.594: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:19.590: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.590: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.590: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.593: INFO: Number of nodes with available pods: 2 Apr 29 22:19:19.593: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 29 22:19:19.605: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.605: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.605: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:19.608: INFO: Number of nodes with available pods: 1 Apr 29 22:19:19.608: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:20.613: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:20.613: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:20.613: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:20.616: INFO: Number of nodes with available pods: 1 Apr 29 22:19:20.616: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:21.614: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:21.614: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:21.614: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:21.617: INFO: Number of nodes with available pods: 1 Apr 29 22:19:21.617: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:22.615: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:22.615: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:22.615: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:22.619: INFO: Number of nodes with available pods: 1 Apr 29 22:19:22.619: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:23.616: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:23.617: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:23.617: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:23.619: INFO: Number of nodes with available pods: 1 Apr 29 22:19:23.619: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:24.615: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:24.616: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:24.616: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:24.619: INFO: Number of nodes with available pods: 1 Apr 29 22:19:24.619: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:25.613: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:25.613: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:25.613: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:25.615: INFO: Number of nodes with available pods: 1 Apr 29 22:19:25.615: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:26.616: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:26.616: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:26.616: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:26.619: INFO: Number of nodes with available pods: 1 Apr 29 22:19:26.619: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:27.614: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:27.614: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:27.614: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:27.617: INFO: Number of nodes with available pods: 1 Apr 29 22:19:27.617: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:28.615: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:28.615: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:28.615: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:28.618: INFO: Number of nodes with available pods: 1 Apr 29 22:19:28.618: INFO: Node node2 is running more than one daemon pod Apr 29 22:19:29.615: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:29.615: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:29.615: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:19:29.618: INFO: Number of nodes with available pods: 2 Apr 29 22:19:29.618: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7982, will wait for the garbage collector to delete the pods Apr 29 22:19:29.682: INFO: Deleting DaemonSet.extensions daemon-set took: 8.520885ms Apr 29 22:19:29.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.056877ms Apr 29 22:19:35.287: INFO: Number of nodes with available pods: 0 Apr 29 22:19:35.287: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 22:19:35.289: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53338"},"items":null} Apr 29 22:19:35.292: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53338"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:19:35.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7982" for this suite. • [SLOW TEST:19.776 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":6,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:19:35.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 22:19:35.343: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 22:19:35.352: INFO: Waiting for terminating namespaces to be deleted... Apr 29 22:19:35.354: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 22:19:35.367: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:19:35.367: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:19:35.367: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:19:35.367: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 22:19:35.367: INFO: Container discover ready: false, restart count 0 Apr 29 22:19:35.367: INFO: Container init ready: false, restart count 0 Apr 29 22:19:35.367: INFO: Container install ready: false, restart count 0 Apr 29 22:19:35.367: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.367: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:19:35.367: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.367: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:19:35.367: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.367: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:19:35.367: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.367: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:19:35.367: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.367: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:19:35.367: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.368: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:19:35.368: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.368: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:19:35.368: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.368: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:19:35.368: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:19:35.368: INFO: Container collectd ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:19:35.368: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:19:35.368: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:19:35.368: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 22:19:35.368: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container grafana ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:19:35.368: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.368: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:19:35.368: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 22:19:35.378: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:19:35.378: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:19:35.378: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:19:35.378: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 22:19:35.378: INFO: Container discover ready: false, restart count 0 Apr 29 22:19:35.378: INFO: Container init ready: false, restart count 0 Apr 29 22:19:35.378: INFO: Container install ready: false, restart count 0 Apr 29 22:19:35.378: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:19:35.378: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:19:35.378: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:19:35.378: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:19:35.378: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:19:35.378: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.378: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:19:35.379: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:19:35.379: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:19:35.379: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:19:35.379: INFO: Container collectd ready: true, restart count 0 Apr 29 22:19:35.379: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:19:35.379: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:19:35.379: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:19:35.379: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:19:35.379: INFO: Container node-exporter ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c09d5a93-438f-4dd3-bd01-2d620329efb4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-c09d5a93-438f-4dd3-bd01-2d620329efb4 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c09d5a93-438f-4dd3-bd01-2d620329efb4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:19:43.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-735" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.145 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":7,"skipped":3236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:19:43.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 29 22:19:43.501: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 22:20:43.558: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:20:43.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:20:43.592: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Apr 29 22:20:43.595: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:20:43.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6370" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:20:43.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-115" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.205 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":8,"skipped":3703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:20:43.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Apr 29 22:20:43.726: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:43.726: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:43.726: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:43.728: INFO: Number of nodes with available pods: 0 Apr 29 22:20:43.728: INFO: Node node1 is running more than one daemon pod Apr 29 22:20:44.733: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:44.733: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:44.733: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:44.735: INFO: Number of nodes with available pods: 0 Apr 29 22:20:44.736: INFO: Node node1 is running more than one daemon pod Apr 29 22:20:45.734: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:45.734: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:45.734: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:45.737: INFO: Number of nodes with available pods: 0 Apr 29 22:20:45.737: INFO: Node node1 is running more than one daemon pod Apr 29 22:20:46.736: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:46.736: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:46.736: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:46.739: INFO: Number of nodes with available pods: 1 Apr 29 22:20:46.739: INFO: Node node1 is running more than one daemon pod Apr 29 22:20:47.734: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.734: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.735: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.737: INFO: Number of nodes with available pods: 2 Apr 29 22:20:47.737: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Apr 29 22:20:47.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:47.758: INFO: Number of nodes with available pods: 1 Apr 29 22:20:47.758: INFO: Node node2 is running more than one daemon pod Apr 29 22:20:48.764: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:48.764: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:48.764: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:48.767: INFO: Number of nodes with available pods: 1 Apr 29 22:20:48.767: INFO: Node node2 is running more than one daemon pod Apr 29 22:20:49.765: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:49.765: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:49.765: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:49.768: INFO: Number of nodes with available pods: 1 Apr 29 22:20:49.768: INFO: Node node2 is running more than one daemon pod Apr 29 22:20:50.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:50.763: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:50.763: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:50.766: INFO: Number of nodes with available pods: 1 Apr 29 22:20:50.766: INFO: Node node2 is running more than one daemon pod Apr 29 22:20:51.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:51.764: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:51.764: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:20:51.766: INFO: Number of nodes with available pods: 2 Apr 29 22:20:51.766: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3664, will wait for the garbage collector to delete the pods Apr 29 22:20:51.830: INFO: Deleting DaemonSet.extensions daemon-set took: 5.404108ms Apr 29 22:20:51.931: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.017844ms Apr 29 22:21:05.234: INFO: Number of nodes with available pods: 0 Apr 29 22:21:05.234: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 22:21:05.236: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53747"},"items":null} Apr 29 22:21:05.238: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53747"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:21:05.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3664" for this suite. • [SLOW TEST:21.576 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":9,"skipped":3927,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:21:05.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 22:21:05.283: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 22:21:05.291: INFO: Waiting for terminating namespaces to be deleted... Apr 29 22:21:05.294: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 22:21:05.306: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:21:05.306: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:21:05.306: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 22:21:05.306: INFO: Container discover ready: false, restart count 0 Apr 29 22:21:05.306: INFO: Container init ready: false, restart count 0 Apr 29 22:21:05.306: INFO: Container install ready: false, restart count 0 Apr 29 22:21:05.306: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 22:21:05.306: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:21:05.306: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:21:05.306: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 22:21:05.306: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 22:21:05.306: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:21:05.306: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:21:05.306: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:21:05.306: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:21:05.306: INFO: Container collectd ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:21:05.306: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:21:05.306: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container node-exporter ready: true, restart count 0 Apr 29 22:21:05.306: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 22:21:05.306: INFO: Container config-reloader ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container grafana ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Container prometheus ready: true, restart count 1 Apr 29 22:21:05.306: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.306: INFO: Container tas-extender ready: true, restart count 0 Apr 29 22:21:05.306: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 22:21:05.325: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 22:21:05.325: INFO: Container nodereport ready: true, restart count 0 Apr 29 22:21:05.325: INFO: Container reconcile ready: true, restart count 0 Apr 29 22:21:05.325: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 22:21:05.325: INFO: Container discover ready: false, restart count 0 Apr 29 22:21:05.325: INFO: Container init ready: false, restart count 0 Apr 29 22:21:05.325: INFO: Container install ready: false, restart count 0 Apr 29 22:21:05.325: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 22:21:05.325: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 22:21:05.325: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container kube-multus ready: true, restart count 1 Apr 29 22:21:05.325: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 22:21:05.325: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 22:21:05.325: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 22:21:05.325: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 22:21:05.325: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 22:21:05.325: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 22:21:05.325: INFO: Container collectd ready: true, restart count 0 Apr 29 22:21:05.325: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 22:21:05.325: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 22:21:05.325: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 22:21:05.325: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 22:21:05.325: INFO: Container node-exporter ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Apr 29 22:21:05.377: INFO: Pod cmk-74bh9 requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod cmk-f5znp requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod cmk-webhook-6c9d5f8578-b9mdv requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod kube-flannel-47phs requesting resource cpu=150m on Node node1 Apr 29 22:21:05.377: INFO: Pod kube-flannel-dbcj8 requesting resource cpu=150m on Node node2 Apr 29 22:21:05.377: INFO: Pod kube-multus-ds-amd64-7slcd requesting resource cpu=100m on Node node2 Apr 29 22:21:05.377: INFO: Pod kube-multus-ds-amd64-kkz4q requesting resource cpu=100m on Node node1 Apr 29 22:21:05.377: INFO: Pod kube-proxy-k6tv2 requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod kube-proxy-v9tgj requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod kubernetes-dashboard-785dcbb76d-d2k5n requesting resource cpu=50m on Node node1 Apr 29 22:21:05.377: INFO: Pod kubernetes-metrics-scraper-5558854cb-g47c2 requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Apr 29 22:21:05.377: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Apr 29 22:21:05.377: INFO: Pod node-feature-discovery-worker-jtjjb requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod node-feature-discovery-worker-kbl9s requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod collectd-ccgw2 requesting resource cpu=0m on Node node1 Apr 29 22:21:05.377: INFO: Pod collectd-zxs8j requesting resource cpu=0m on Node node2 Apr 29 22:21:05.377: INFO: Pod node-exporter-c8777 requesting resource cpu=112m on Node node1 Apr 29 22:21:05.377: INFO: Pod node-exporter-tlpmt requesting resource cpu=112m on Node node2 Apr 29 22:21:05.377: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Apr 29 22:21:05.377: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-khdw5 requesting resource cpu=0m on Node node1 STEP: Starting Pods to consume most of the cluster CPU. Apr 29 22:21:05.377: INFO: Creating a pod which consumes cpu=53454m on Node node1 Apr 29 22:21:05.388: INFO: Creating a pod which consumes cpu=53629m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e.16ea7e1a6fd87b81], Reason = [Scheduled], Message = [Successfully assigned sched-pred-535/filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e.16ea7e1ac6fef30e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e.16ea7e1ade8374c5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 394.547824ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e.16ea7e1ae57a864e], Reason = [Created], Message = [Created container filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e] STEP: Considering event: Type = [Normal], Name = [filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e.16ea7e1aecbe4198], Reason = [Started], Message = [Started container filler-pod-0e9c41be-9f62-45aa-b4ae-5b117a21c02e] STEP: Considering event: Type = [Normal], Name = [filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d.16ea7e1a6f98a413], Reason = [Scheduled], Message = [Successfully assigned sched-pred-535/filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d.16ea7e1ac4da61b5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d.16ea7e1ad7d19dce], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 318.183564ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d.16ea7e1adec61759], Reason = [Created], Message = [Created container filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d] STEP: Considering event: Type = [Normal], Name = [filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d.16ea7e1b027332b2], Reason = [Started], Message = [Started container filler-pod-a09912e5-92e3-424c-bc38-d3425195aa1d] STEP: Considering event: Type = [Warning], Name = [additional-pod.16ea7e1b5f947cad], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:21:10.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-535" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.199 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":10,"skipped":3991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:21:10.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:21:16.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8926" for this suite. STEP: Destroying namespace "nsdeletetest-7443" for this suite. Apr 29 22:21:16.548: INFO: Namespace nsdeletetest-7443 was already deleted STEP: Destroying namespace "nsdeletetest-7997" for this suite. • [SLOW TEST:6.097 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":11,"skipped":4085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:21:16.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:21:16.597: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 29 22:21:16.602: INFO: Number of nodes with available pods: 0 Apr 29 22:21:16.602: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Apr 29 22:21:16.618: INFO: Number of nodes with available pods: 0 Apr 29 22:21:16.618: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:17.623: INFO: Number of nodes with available pods: 0 Apr 29 22:21:17.623: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:18.622: INFO: Number of nodes with available pods: 0 Apr 29 22:21:18.622: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:19.621: INFO: Number of nodes with available pods: 1 Apr 29 22:21:19.621: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 29 22:21:19.637: INFO: Number of nodes with available pods: 1 Apr 29 22:21:19.637: INFO: Number of running nodes: 0, number of available pods: 1 Apr 29 22:21:20.641: INFO: Number of nodes with available pods: 0 Apr 29 22:21:20.641: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 29 22:21:20.652: INFO: Number of nodes with available pods: 0 Apr 29 22:21:20.652: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:21.656: INFO: Number of nodes with available pods: 0 Apr 29 22:21:21.656: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:22.655: INFO: Number of nodes with available pods: 0 Apr 29 22:21:22.655: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:23.658: INFO: Number of nodes with available pods: 0 Apr 29 22:21:23.658: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:24.658: INFO: Number of nodes with available pods: 0 Apr 29 22:21:24.658: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:25.657: INFO: Number of nodes with available pods: 0 Apr 29 22:21:25.657: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:26.658: INFO: Number of nodes with available pods: 0 Apr 29 22:21:26.658: INFO: Node node2 is running more than one daemon pod Apr 29 22:21:27.656: INFO: Number of nodes with available pods: 1 Apr 29 22:21:27.656: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1656, will wait for the garbage collector to delete the pods Apr 29 22:21:27.719: INFO: Deleting DaemonSet.extensions daemon-set took: 4.646432ms Apr 29 22:21:27.820: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.666093ms Apr 29 22:21:35.222: INFO: Number of nodes with available pods: 0 Apr 29 22:21:35.222: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 22:21:35.224: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54004"},"items":null} Apr 29 22:21:35.226: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54004"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:21:35.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1656" for this suite. • [SLOW TEST:18.691 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":12,"skipped":4213,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:21:35.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:21:35.291: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 29 22:21:35.299: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:35.299: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:35.300: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:35.301: INFO: Number of nodes with available pods: 0 Apr 29 22:21:35.302: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:36.307: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:36.307: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:36.307: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:36.309: INFO: Number of nodes with available pods: 0 Apr 29 22:21:36.309: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:37.307: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:37.307: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:37.307: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:37.310: INFO: Number of nodes with available pods: 0 Apr 29 22:21:37.310: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:38.313: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:38.313: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:38.313: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:38.317: INFO: Number of nodes with available pods: 1 Apr 29 22:21:38.317: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:39.307: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:39.307: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:39.307: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:39.310: INFO: Number of nodes with available pods: 2 Apr 29 22:21:39.310: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 29 22:21:39.334: INFO: Wrong image for pod: daemon-set-5gh89. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:39.334: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:39.338: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:39.338: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:39.338: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:40.344: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:40.347: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:40.347: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:40.347: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:41.344: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:41.349: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:41.349: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:41.349: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:42.342: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:42.347: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:42.347: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:42.347: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:43.345: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:43.349: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:43.349: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:43.349: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:44.344: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:44.349: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:44.349: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:44.349: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:45.344: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:45.344: INFO: Pod daemon-set-tg8lj is not available Apr 29 22:21:45.349: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:45.349: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:45.349: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:46.345: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:46.345: INFO: Pod daemon-set-tg8lj is not available Apr 29 22:21:46.350: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:46.350: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:46.350: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:47.343: INFO: Wrong image for pod: daemon-set-b9psn. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Apr 29 22:21:47.343: INFO: Pod daemon-set-tg8lj is not available Apr 29 22:21:47.347: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:47.347: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:47.347: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:48.351: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:48.351: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:48.351: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:49.348: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:49.348: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:49.348: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:50.348: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:50.348: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:50.348: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:51.348: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:51.348: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:51.348: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:52.346: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:52.346: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:52.346: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:53.349: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:53.350: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:53.350: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:54.348: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:54.348: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:54.348: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.345: INFO: Pod daemon-set-n4vv2 is not available Apr 29 22:21:55.351: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.351: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.351: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 29 22:21:55.355: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.355: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.355: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:55.359: INFO: Number of nodes with available pods: 1 Apr 29 22:21:55.359: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:56.364: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:56.364: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:56.364: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:56.366: INFO: Number of nodes with available pods: 1 Apr 29 22:21:56.366: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:57.365: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:57.365: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:57.365: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:57.368: INFO: Number of nodes with available pods: 1 Apr 29 22:21:57.368: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:58.367: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:58.367: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:58.368: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:58.371: INFO: Number of nodes with available pods: 1 Apr 29 22:21:58.371: INFO: Node node1 is running more than one daemon pod Apr 29 22:21:59.367: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:59.367: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:59.367: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 29 22:21:59.370: INFO: Number of nodes with available pods: 2 Apr 29 22:21:59.370: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1517, will wait for the garbage collector to delete the pods Apr 29 22:21:59.443: INFO: Deleting DaemonSet.extensions daemon-set took: 5.408142ms Apr 29 22:21:59.543: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.172033ms Apr 29 22:22:05.248: INFO: Number of nodes with available pods: 0 Apr 29 22:22:05.248: INFO: Number of running nodes: 0, number of available pods: 0 Apr 29 22:22:05.250: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54199"},"items":null} Apr 29 22:22:05.253: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54199"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:22:05.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1517" for this suite. • [SLOW TEST:30.014 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":13,"skipped":4668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:22:05.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 29 22:22:05.303: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 22:23:05.357: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:23:05.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Apr 29 22:23:09.413: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 29 22:23:23.467: INFO: pods created so far: [1 1 1] Apr 29 22:23:23.467: INFO: length of pods created so far: 3 Apr 29 22:23:41.481: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:23:48.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7861" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:23:48.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8061" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:103.287 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":4722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:23:48.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:24:19.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3701" for this suite. STEP: Destroying namespace "nsdeletetest-3659" for this suite. Apr 29 22:24:19.705: INFO: Namespace nsdeletetest-3659 was already deleted STEP: Destroying namespace "nsdeletetest-429" for this suite. • [SLOW TEST:31.143 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":15,"skipped":5046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:24:19.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 29 22:24:19.735: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 22:25:19.789: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Apr 29 22:25:19.816: INFO: Created pod: pod0-sched-preemption-low-priority Apr 29 22:25:19.837: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:25:37.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8330" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:78.209 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":16,"skipped":5087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 22:25:37.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 29 22:25:37.960: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 22:26:38.011: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Apr 29 22:26:38.041: INFO: Created pod: pod0-sched-preemption-low-priority Apr 29 22:26:38.060: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 22:27:00.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6275" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.224 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":17,"skipped":5660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSApr 29 22:27:00.153: INFO: Running AfterSuite actions on all nodes Apr 29 22:27:00.153: INFO: Running AfterSuite actions on node 1 Apr 29 22:27:00.153: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0} Ran 17 of 5773 Specs in 903.356 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped PASS Ginkgo ran 1 suite in 15m4.765712596s Test Suite Passed