I0414 16:50:57.882849 21 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0414 16:50:57.882983 21 e2e.go:124] Starting e2e run "af6f8ff8-387a-4730-b74d-59a982b117ee" on Ginkgo node 1
{"msg":"Test Suite starting","total":14,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1618419056 - Will randomize all specs
Will run 14 of 4994 specs

Apr 14 16:50:57.943: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:50:57.950: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 14 16:50:57.981: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 14 16:50:58.043: INFO: The status of Pod cmk-init-discover-node1-ppgf5 is Succeeded, skipping waiting
Apr 14 16:50:58.043: INFO: The status of Pod cmk-init-discover-node2-tqmv6 is Succeeded, skipping waiting
Apr 14 16:50:58.043: INFO: 40 / 43 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 14 16:50:58.043: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 14 16:50:58.043: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 14 16:50:58.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 14 16:50:58.051: INFO: e2e test version: v1.18.17
Apr 14 16:50:58.052: INFO: kube-apiserver version: v1.18.8
Apr 14 16:50:58.052: INFO: >>> kubeConfig: /root/.kube/config
Apr 14 16:50:58.058: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:50:58.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Apr 14 16:50:58.085: INFO: Found PodSecurityPolicies; assuming PodSecurityPolicy is enabled.
Apr 14 16:50:58.094: INFO: Found ClusterRoles; assuming RBAC is enabled.
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-5482
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 14 16:50:58.200: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 14 16:50:58.212: INFO: Waiting for terminating namespaces to be deleted...
Apr 14 16:50:58.215: INFO: Logging pods the kubelet thinks is on node node1 before test
Apr 14 16:50:58.228: INFO: kube-multus-ds-amd64-jdgxh from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container kube-multus ready: true, restart count 1
Apr 14 16:50:58.228: INFO: node-exporter-zzqpq from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:50:58.228: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:50:58.228: INFO: collectd-sc5nx from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container collectd ready: true, restart count 0
Apr 14 16:50:58.228: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:50:58.228: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:50:58.228: INFO: kubernetes-dashboard-57777fbdcb-5tc7z from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container kubernetes-dashboard ready: true, restart count 2
Apr 14 16:50:58.228: INFO: cmk-init-discover-node1-ppgf5 from kube-system started at 2021-04-14 15:31:02 +0000 UTC (3 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container discover ready: false, restart count 0
Apr 14 16:50:58.228: INFO: Container init ready: false, restart count 0
Apr 14 16:50:58.228: INFO: Container install ready: false, restart count 0
Apr 14 16:50:58.228: INFO: nginx-proxy-node1 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.228: INFO: Container nginx-proxy ready: true, restart count 1
Apr 14 16:50:58.228: INFO: kube-proxy-6kqs6 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container kube-proxy ready: true, restart count 1
Apr 14 16:50:58.229: INFO: kube-flannel-94jrd from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container kube-flannel ready: true, restart count 1
Apr 14 16:50:58.229: INFO: cmk-init-discover-node2-lqbjq from kube-system started at 2021-04-14 15:31:22 +0000 UTC (3 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container discover ready: false, restart count 0
Apr 14 16:50:58.229: INFO: Container init ready: false, restart count 0
Apr 14 16:50:58.229: INFO: Container install ready: false, restart count 0
Apr 14 16:50:58.229: INFO: cmk-webhook-888945845-9ctsr from kube-system started at 2021-04-14 15:32:06 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container cmk-webhook ready: true, restart count 0
Apr 14 16:50:58.229: INFO: cmk-d5wr4 from kube-system started at 2021-04-14 15:32:05 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:50:58.229: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:50:58.229: INFO: prometheus-k8s-0 from monitoring started at 2021-04-14 15:33:18 +0000 UTC (5 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 14 16:50:58.229: INFO: Container grafana ready: true, restart count 0
Apr 14 16:50:58.229: INFO: Container prometheus ready: true, restart count 1
Apr 14 16:50:58.229: INFO: Container prometheus-config-reloader ready: true, restart count 0
Apr 14 16:50:58.229: INFO: Container rules-configmap-reloader ready: true, restart count 0
Apr 14 16:50:58.229: INFO: node-feature-discovery-worker-ps9wk from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:50:58.229: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.229: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:50:58.229: INFO: Logging pods the kubelet thinks is on node node2 before test
Apr 14 16:50:58.244: INFO: prometheus-operator-f66f5fb4d-w6k89 from monitoring started at 2021-04-14 15:32:53 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:50:58.244: INFO: Container prometheus-operator ready: true, restart count 0
Apr 14 16:50:58.244: INFO: cmk-init-discover-node2-tqmv6 from kube-system started at 2021-04-14 15:31:42 +0000 UTC (3 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container discover ready: false, restart count 0
Apr 14 16:50:58.244: INFO: Container init ready: false, restart count 0
Apr 14 16:50:58.244: INFO: Container install ready: false, restart count 0
Apr 14 16:50:58.244: INFO: nginx-proxy-node2 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container nginx-proxy ready: true, restart count 2
Apr 14 16:50:58.244: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:50:58.244: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 from monitoring started at 2021-04-14 15:35:55 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container tas-controller ready: true, restart count 0
Apr 14 16:50:58.244: INFO: Container tas-extender ready: true, restart count 0
Apr 14 16:50:58.244: INFO: cmk-5gbnz from kube-system started at 2021-04-14 15:32:06 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:50:58.244: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:50:58.244: INFO: kube-flannel-5mrxg from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kube-flannel ready: true, restart count 3
Apr 14 16:50:58.244: INFO: kubernetes-metrics-scraper-54fbb4d595-l4rpk from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kubernetes-metrics-scraper ready: true, restart count 3
Apr 14 16:50:58.244: INFO: kube-proxy-mr5c7 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kube-proxy ready: true, restart count 2
Apr 14 16:50:58.244: INFO: node-feature-discovery-worker-jx2kp from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:50:58.244: INFO: collectd-l2bgc from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container collectd ready: true, restart count 0
Apr 14 16:50:58.244: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:50:58.244: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:50:58.244: INFO: kube-multus-ds-amd64-2ptgq from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:50:58.244: INFO: Container kube-multus ready: true, restart count 2
Apr 14 16:50:58.245: INFO: node-exporter-pdn2v from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:50:58.245: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:50:58.245: INFO: Container node-exporter ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e56d7fc7-4479-4fb2-bfbf-763877961aef 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e56d7fc7-4479-4fb2-bfbf-763877961aef off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e56d7fc7-4479-4fb2-bfbf-763877961aef
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:51:14.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5482" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:16.297 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":14,"completed":1,"skipped":332,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
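------------------------------
The three pod STEPs above are the whole predicate under test: host ports only conflict when protocol, port, and effective host IP all collide, so pod2 (different hostIP) and pod3 (different protocol) both still schedule alongside pod1. A minimal sketch of that rule in Go; the portKey type and conflicts helper are illustrative stand-ins, not the scheduler's actual code:

package main

import "fmt"

// portKey mirrors the (hostIP, hostPort, protocol) tuple the scheduler
// compares when it checks host-port conflicts on a node.
type portKey struct {
	ip       string // "0.0.0.0" (or an empty hostIP) binds all interfaces
	port     int
	protocol string
}

// conflicts reports whether two host-port claims collide: same protocol
// and port, and overlapping IPs (equal, or either side the wildcard).
func conflicts(a, b portKey) bool {
	if a.protocol != b.protocol || a.port != b.port {
		return false
	}
	return a.ip == b.ip || a.ip == "0.0.0.0" || b.ip == "0.0.0.0"
}

func main() {
	pod1 := portKey{"127.0.0.1", 54321, "TCP"}
	pod2 := portKey{"127.0.0.2", 54321, "TCP"} // same port, different hostIP
	pod3 := portKey{"127.0.0.2", 54321, "UDP"} // same hostIP and port, different protocol
	fmt.Println(conflicts(pod1, pod2)) // false, so pod2 schedules next to pod1
	fmt.Println(conflicts(pod2, pod3)) // false, so pod3 schedules as well
}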
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:51:14.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in emptydir-wrapper-157
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 14 16:51:14.693: INFO: Pod name wrapped-volume-race-2a54d7b7-b3cc-49d3-a74c-98e97986b3d2: Found 2 pods out of 5
Apr 14 16:51:19.700: INFO: Pod name wrapped-volume-race-2a54d7b7-b3cc-49d3-a74c-98e97986b3d2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2a54d7b7-b3cc-49d3-a74c-98e97986b3d2 in namespace emptydir-wrapper-157, will wait for the garbage collector to delete the pods
Apr 14 16:51:33.779: INFO: Deleting ReplicationController wrapped-volume-race-2a54d7b7-b3cc-49d3-a74c-98e97986b3d2 took: 6.939069ms
Apr 14 16:51:34.380: INFO: Terminating ReplicationController wrapped-volume-race-2a54d7b7-b3cc-49d3-a74c-98e97986b3d2 pods took: 600.273906ms
STEP: Creating RC which spawns configmap-volume pods
Apr 14 16:51:49.295: INFO: Pod name wrapped-volume-race-9fbe8ef4-96e9-4461-b1d3-9e6815a20b44: Found 0 pods out of 5
Apr 14 16:51:54.303: INFO: Pod name wrapped-volume-race-9fbe8ef4-96e9-4461-b1d3-9e6815a20b44: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9fbe8ef4-96e9-4461-b1d3-9e6815a20b44 in namespace emptydir-wrapper-157, will wait for the garbage collector to delete the pods
Apr 14 16:52:10.384: INFO: Deleting ReplicationController wrapped-volume-race-9fbe8ef4-96e9-4461-b1d3-9e6815a20b44 took: 6.323144ms
Apr 14 16:52:10.985: INFO: Terminating ReplicationController wrapped-volume-race-9fbe8ef4-96e9-4461-b1d3-9e6815a20b44 pods took: 600.625321ms
STEP: Creating RC which spawns configmap-volume pods
Apr 14 16:52:19.303: INFO: Pod name wrapped-volume-race-cf12c0aa-8e55-4d38-abc9-aa8ee8a9de26: Found 0 pods out of 5
Apr 14 16:52:24.310: INFO: Pod name wrapped-volume-race-cf12c0aa-8e55-4d38-abc9-aa8ee8a9de26: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cf12c0aa-8e55-4d38-abc9-aa8ee8a9de26 in namespace emptydir-wrapper-157, will wait for the garbage collector to delete the pods
Apr 14 16:52:38.389: INFO: Deleting ReplicationController wrapped-volume-race-cf12c0aa-8e55-4d38-abc9-aa8ee8a9de26 took: 5.52753ms
Apr 14 16:52:38.989: INFO: Terminating ReplicationController wrapped-volume-race-cf12c0aa-8e55-4d38-abc9-aa8ee8a9de26 pods took: 600.618734ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:52:49.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-157" for this suite.
• [SLOW TEST:95.143 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":14,"completed":2,"skipped":776,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
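------------------------------
The setup repeated three times above is the interesting part: 50 ConfigMaps, then a ReplicationController whose five replicas each mount all 50 as separate volumes, forcing the kubelet to construct many wrapper volumes concurrently (the historical race this test guards against). A condensed client-go sketch of that setup step, assuming the v0.18-era client-go API (Create takes a context) and illustrative ConfigMap names; the RC itself is omitted:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "emptydir-wrapper-157"

	// Create 50 ConfigMaps and build a matching volume and mount for each;
	// a pod template carrying all 50 exercises the kubelet's wrapper-volume
	// setup path, which is where the race used to live.
	volumes := make([]v1.Volume, 0, 50)
	mounts := make([]v1.VolumeMount, 0, 50)
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name
		if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), &v1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data-1": "value-1"},
		}, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		volumes = append(volumes, v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, v1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	_ = volumes // wire volumes and mounts into the RC's pod template
	_ = mounts
}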
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:52:49.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-9853
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 14 16:52:49.646: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 14 16:52:49.659: INFO: Waiting for terminating namespaces to be deleted...
Apr 14 16:52:49.661: INFO: Logging pods the kubelet thinks is on node node1 before test
Apr 14 16:52:49.684: INFO: nginx-proxy-node1 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container nginx-proxy ready: true, restart count 1
Apr 14 16:52:49.684: INFO: kube-proxy-6kqs6 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container kube-proxy ready: true, restart count 1
Apr 14 16:52:49.684: INFO: kube-flannel-94jrd from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container kube-flannel ready: true, restart count 1
Apr 14 16:52:49.684: INFO: cmk-init-discover-node2-lqbjq from kube-system started at 2021-04-14 15:31:22 +0000 UTC (3 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container discover ready: false, restart count 0
Apr 14 16:52:49.684: INFO: Container init ready: false, restart count 0
Apr 14 16:52:49.684: INFO: Container install ready: false, restart count 0
Apr 14 16:52:49.684: INFO: cmk-webhook-888945845-9ctsr from kube-system started at 2021-04-14 15:32:06 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container cmk-webhook ready: true, restart count 0
Apr 14 16:52:49.684: INFO: cmk-d5wr4 from kube-system started at 2021-04-14 15:32:05 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:52:49.684: INFO: prometheus-k8s-0 from monitoring started at 2021-04-14 15:33:18 +0000 UTC (5 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container grafana ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container prometheus ready: true, restart count 1
Apr 14 16:52:49.684: INFO: Container prometheus-config-reloader ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container rules-configmap-reloader ready: true, restart count 0
Apr 14 16:52:49.684: INFO: node-feature-discovery-worker-ps9wk from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:52:49.684: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:52:49.684: INFO: kube-multus-ds-amd64-jdgxh from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container kube-multus ready: true, restart count 1
Apr 14 16:52:49.684: INFO: node-exporter-zzqpq from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:52:49.684: INFO: collectd-sc5nx from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:52:49.684: INFO: Container collectd ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:52:49.684: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:52:49.685: INFO: kubernetes-dashboard-57777fbdcb-5tc7z from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.685: INFO: Container kubernetes-dashboard ready: true, restart count 2
Apr 14 16:52:49.685: INFO: cmk-init-discover-node1-ppgf5 from kube-system started at 2021-04-14 15:31:02 +0000 UTC (3 container statuses recorded)
Apr 14 16:52:49.685: INFO: Container discover ready: false, restart count 0
Apr 14 16:52:49.685: INFO: Container init ready: false, restart count 0
Apr 14 16:52:49.685: INFO: Container install ready: false, restart count 0
Apr 14 16:52:49.685: INFO: Logging pods the kubelet thinks is on node node2 before test
Apr 14 16:52:49.700: INFO: kube-flannel-5mrxg from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kube-flannel ready: true, restart count 3
Apr 14 16:52:49.700: INFO: kubernetes-metrics-scraper-54fbb4d595-l4rpk from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kubernetes-metrics-scraper ready: true, restart count 3
Apr 14 16:52:49.700: INFO: kube-proxy-mr5c7 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kube-proxy ready: true, restart count 2
Apr 14 16:52:49.700: INFO: node-feature-discovery-worker-jx2kp from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:52:49.700: INFO: collectd-l2bgc from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container collectd ready: true, restart count 0
Apr 14 16:52:49.700: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:52:49.700: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:52:49.700: INFO: kube-multus-ds-amd64-2ptgq from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kube-multus ready: true, restart count 2
Apr 14 16:52:49.700: INFO: node-exporter-pdn2v from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:52:49.700: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:52:49.700: INFO: prometheus-operator-f66f5fb4d-w6k89 from monitoring started at 2021-04-14 15:32:53 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:52:49.700: INFO: Container prometheus-operator ready: true, restart count 0
Apr 14 16:52:49.700: INFO: cmk-init-discover-node2-tqmv6 from kube-system started at 2021-04-14 15:31:42 +0000 UTC (3 container statuses recorded)
Apr 14 16:52:49.700: INFO: Container discover ready: false, restart count 0
Apr 14 16:52:49.701: INFO: Container init ready: false, restart count 0
Apr 14 16:52:49.701: INFO: Container install ready: false, restart count 0
Apr 14 16:52:49.701: INFO: nginx-proxy-node2 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.701: INFO: Container nginx-proxy ready: true, restart count 2
Apr 14 16:52:49.701: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:52:49.701: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:52:49.701: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 from monitoring started at 2021-04-14 15:35:55 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.701: INFO: Container tas-controller ready: true, restart count 0
Apr 14 16:52:49.701: INFO: Container tas-extender ready: true, restart count 0
Apr 14 16:52:49.701: INFO: cmk-5gbnz from kube-system started at 2021-04-14 15:32:06 +0000 UTC (2 container statuses recorded)
Apr 14 16:52:49.701: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:52:49.701: INFO: Container reconcile ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-95e0887d-c3ba-42a0-9590-34f441b7f944 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-95e0887d-c3ba-42a0-9590-34f441b7f944 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-95e0887d-c3ba-42a0-9590-34f441b7f944
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:57:57.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9853" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:308.270 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":14,"completed":3,"skipped":1213,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
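------------------------------
pod4's hostIP is left empty, which the API records as 0.0.0.0 and which claims every interface, so pod5's 127.0.0.1 claim on the same port and protocol overlaps it; pod5 stays Pending for the full "expect not scheduled" wait, which is why this spec ran ~308 s while its sibling took 16. A sketch of the two specs using k8s.io/api/core/v1 types; the pause image matches the one pulled elsewhere in this run, and the node label is the random one printed above:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// hostPortSpec pins a pod to the labeled node and claims hostPort 54322
// on the given hostIP, mirroring the pod4/pod5 STEPs above.
func hostPortSpec(name, hostIP string) v1.PodSpec {
	return v1.PodSpec{
		NodeSelector: map[string]string{
			"kubernetes.io/e2e-95e0887d-c3ba-42a0-9590-34f441b7f944": "95",
		},
		Containers: []v1.Container{{
			Name:  name,
			Image: "k8s.gcr.io/pause:3.2",
			Ports: []v1.ContainerPort{{
				ContainerPort: 54322,
				HostPort:      54322,
				HostIP:        hostIP, // "" is stored as 0.0.0.0: bind all interfaces
				Protocol:      v1.ProtocolTCP,
			}},
		}},
	}
}

func main() {
	pod4 := hostPortSpec("pod4", "")          // wildcard bind, schedules
	pod5 := hostPortSpec("pod5", "127.0.0.1") // overlaps the wildcard, stays Pending
	fmt.Printf("pod4 hostIP=%q pod5 hostIP=%q\n",
		pod4.Containers[0].Ports[0].HostIP, pod5.Containers[0].Ports[0].HostIP)
}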
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:57:57.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7465
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 14 16:57:57.915: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 14 16:57:57.937: INFO: Waiting for terminating namespaces to be deleted...
Apr 14 16:57:57.940: INFO: Logging pods the kubelet thinks is on node node1 before test
Apr 14 16:57:57.977: INFO: node-feature-discovery-worker-ps9wk from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:57:57.978: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:57:57.978: INFO: kube-multus-ds-amd64-jdgxh from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kube-multus ready: true, restart count 1
Apr 14 16:57:57.978: INFO: node-exporter-zzqpq from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:57:57.978: INFO: collectd-sc5nx from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container collectd ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:57:57.978: INFO: kubernetes-dashboard-57777fbdcb-5tc7z from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kubernetes-dashboard ready: true, restart count 2
Apr 14 16:57:57.978: INFO: cmk-init-discover-node1-ppgf5 from kube-system started at 2021-04-14 15:31:02 +0000 UTC (3 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container discover ready: false, restart count 0
Apr 14 16:57:57.978: INFO: Container init ready: false, restart count 0
Apr 14 16:57:57.978: INFO: Container install ready: false, restart count 0
Apr 14 16:57:57.978: INFO: nginx-proxy-node1 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container nginx-proxy ready: true, restart count 1
Apr 14 16:57:57.978: INFO: kube-proxy-6kqs6 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kube-proxy ready: true, restart count 1
Apr 14 16:57:57.978: INFO: kube-flannel-94jrd from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container kube-flannel ready: true, restart count 1
Apr 14 16:57:57.978: INFO: cmk-init-discover-node2-lqbjq from kube-system started at 2021-04-14 15:31:22 +0000 UTC (3 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container discover ready: false, restart count 0
Apr 14 16:57:57.978: INFO: Container init ready: false, restart count 0
Apr 14 16:57:57.978: INFO: Container install ready: false, restart count 0
Apr 14 16:57:57.978: INFO: cmk-webhook-888945845-9ctsr from kube-system started at 2021-04-14 15:32:06 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container cmk-webhook ready: true, restart count 0
Apr 14 16:57:57.978: INFO: cmk-d5wr4 from kube-system started at 2021-04-14 15:32:05 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:57:57.978: INFO: prometheus-k8s-0 from monitoring started at 2021-04-14 15:33:18 +0000 UTC (5 container statuses recorded)
Apr 14 16:57:57.978: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container grafana ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container prometheus ready: true, restart count 1
Apr 14 16:57:57.978: INFO: Container prometheus-config-reloader ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Container rules-configmap-reloader ready: true, restart count 0
Apr 14 16:57:57.978: INFO: Logging pods the kubelet thinks is on node node2 before test
Apr 14 16:57:57.996: INFO: node-feature-discovery-worker-jx2kp from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:57:57.996: INFO: kube-proxy-mr5c7 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-proxy ready: true, restart count 2
Apr 14 16:57:57.996: INFO: node-exporter-pdn2v from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:57:57.996: INFO: collectd-l2bgc from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container collectd ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:57:57.996: INFO: pod4 from sched-pred-9853 started at 2021-04-14 16:52:53 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container pod4 ready: true, restart count 0
Apr 14 16:57:57.996: INFO: kube-multus-ds-amd64-2ptgq from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-multus ready: true, restart count 2
Apr 14 16:57:57.996: INFO: prometheus-operator-f66f5fb4d-w6k89 from monitoring started at 2021-04-14 15:32:53 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container prometheus-operator ready: true, restart count 0
Apr 14 16:57:57.996: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:57:57.996: INFO: cmk-init-discover-node2-tqmv6 from kube-system started at 2021-04-14 15:31:42 +0000 UTC (3 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container discover ready: false, restart count 0
Apr 14 16:57:57.996: INFO: Container init ready: false, restart count 0
Apr 14 16:57:57.996: INFO: Container install ready: false, restart count 0
Apr 14 16:57:57.996: INFO: nginx-proxy-node2 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container nginx-proxy ready: true, restart count 2
Apr 14 16:57:57.996: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 from monitoring started at 2021-04-14 15:35:55 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container tas-controller ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container tas-extender ready: true, restart count 0
Apr 14 16:57:57.996: INFO: cmk-5gbnz from kube-system started at 2021-04-14 15:32:06 +0000 UTC (2 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:57:57.996: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:57:57.996: INFO: kubernetes-metrics-scraper-54fbb4d595-l4rpk from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kubernetes-metrics-scraper ready: true, restart count 3
Apr 14 16:57:57.996: INFO: kube-flannel-5mrxg from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:57:57.996: INFO: Container kube-flannel ready: true, restart count 3
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
Apr 14 16:58:04.104: INFO: Pod cmk-5gbnz requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod cmk-d5wr4 requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod cmk-webhook-888945845-9ctsr requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod kube-flannel-5mrxg requesting resource cpu=150m on Node node2
Apr 14 16:58:04.104: INFO: Pod kube-flannel-94jrd requesting resource cpu=150m on Node node1
Apr 14 16:58:04.104: INFO: Pod kube-multus-ds-amd64-2ptgq requesting resource cpu=100m on Node node2
Apr 14 16:58:04.104: INFO: Pod kube-multus-ds-amd64-jdgxh requesting resource cpu=100m on Node node1
Apr 14 16:58:04.104: INFO: Pod kube-proxy-6kqs6 requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod kube-proxy-mr5c7 requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod kubernetes-dashboard-57777fbdcb-5tc7z requesting resource cpu=50m on Node node1
Apr 14 16:58:04.104: INFO: Pod kubernetes-metrics-scraper-54fbb4d595-l4rpk requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Apr 14 16:58:04.104: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Apr 14 16:58:04.104: INFO: Pod node-feature-discovery-worker-jx2kp requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod node-feature-discovery-worker-ps9wk requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod collectd-l2bgc requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod collectd-sc5nx requesting resource cpu=0m on Node node1
Apr 14 16:58:04.104: INFO: Pod node-exporter-pdn2v requesting resource cpu=112m on Node node2
Apr 14 16:58:04.104: INFO: Pod node-exporter-zzqpq requesting resource cpu=112m on Node node1
Apr 14 16:58:04.104: INFO: Pod prometheus-k8s-0 requesting resource cpu=300m on Node node1
Apr 14 16:58:04.104: INFO: Pod prometheus-operator-f66f5fb4d-w6k89 requesting resource cpu=100m on Node node2
Apr 14 16:58:04.104: INFO: Pod tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 requesting resource cpu=0m on Node node2
Apr 14 16:58:04.104: INFO: Pod pod4 requesting resource cpu=0m on Node node2
STEP: Starting Pods to consume most of the cluster CPU.
Apr 14 16:58:04.104: INFO: Creating a pod which consumes cpu=53384m on Node node1
Apr 14 16:58:04.116: INFO: Creating a pod which consumes cpu=53559m on Node node2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f230f47bdc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7465/filler-pod-28934a26-d4b6-4081-b203-90b7895800e4 to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f281fd5a36], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.220/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f282ba5b3e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f2a11ec8e4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f2a77c24a6], Reason = [Created], Message = [Created container filler-pod-28934a26-d4b6-4081-b203-90b7895800e4]
STEP: Considering event: Type = [Normal], Name = [filler-pod-28934a26-d4b6-4081-b203-90b7895800e4.1675c7f2ad238fee], Reason = [Started], Message = [Started container filler-pod-28934a26-d4b6-4081-b203-90b7895800e4]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f230544af0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7465/filler-pod-f98ed865-4531-475f-885d-416043641413 to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f28cba0d80], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.205/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f28d84d0a9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f2aca94363], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f2b48e9889], Reason = [Created], Message = [Created container filler-pod-f98ed865-4531-475f-885d-416043641413]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f98ed865-4531-475f-885d-416043641413.1675c7f2ba2d5aa4], Reason = [Started], Message = [Started container filler-pod-f98ed865-4531-475f-885d-416043641413]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1675c7f320d98f9d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1675c7f321257b09], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:58:09.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7465" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:11.395 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":14,"completed":4,"skipped":1359,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
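------------------------------
The filler-pod sizing above is plain arithmetic: sum the CPU requests already scheduled onto each labeled node, subtract from the node's allocatable CPU, and request exactly the remainder, so any further pod that asks for CPU fails with Insufficient cpu. A stdlib Go sketch of that bookkeeping; the per-pod requests are the figures logged above, while the allocatable values are assumptions back-computed so the output reproduces the logged filler sizes:

package main

import "fmt"

func main() {
	// Milli-CPU already requested per node, summed from the log above.
	requested := map[string]int64{
		"node1": 150 + 100 + 50 + 25 + 112 + 300, // flannel, multus, dashboard, nginx-proxy, node-exporter, prometheus
		"node2": 150 + 100 + 25 + 112 + 100,      // flannel, multus, nginx-proxy, node-exporter, prometheus-operator
	}
	// Assumed allocatable milli-CPU, back-computed from the filler sizes.
	allocatable := map[string]int64{"node1": 54121, "node2": 54046}

	for _, node := range []string{"node1", "node2"} {
		filler := allocatable[node] - requested[node]
		fmt.Printf("Creating a pod which consumes cpu=%dm on Node %s\n", filler, node)
	}
	// With both nodes saturated, one more CPU-requesting pod yields:
	// "0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint
	// {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
}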
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:58:09.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3734
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
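------------------------------
The polling that follows repeats once per second: list the nodes, skip any carrying a NoSchedule taint (the three masters), then count nodes whose daemon pod is Ready and compare against the schedulable-node count. A client-go sketch of that loop under the same v0.18-era API; the daemonset-name label selector is an assumption about how the test labels its pods, not something visible in the log:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// schedulableNodes counts nodes that a DaemonSet pod without master
// tolerations can land on, mirroring the "can't tolerate node masterN"
// skips in the log.
func schedulableNodes(cs kubernetes.Interface) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	count := 0
	for _, n := range nodes.Items {
		ok := true
		for _, t := range n.Spec.Taints {
			if t.Effect == v1.TaintEffectNoSchedule {
				ok = false
			}
		}
		if ok {
			count++
		}
	}
	return count, nil
}

// waitForDaemonPods polls until every schedulable node runs a Ready daemon pod.
func waitForDaemonPods(cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		want, err := schedulableNodes(cs)
		if err != nil {
			return false, err
		}
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"}) // assumed label
		if err != nil {
			return false, err
		}
		readyNodes := map[string]bool{}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
					readyNodes[p.Spec.NodeName] = true
				}
			}
		}
		fmt.Printf("Number of nodes with available pods: %d\n", len(readyNodes))
		return len(readyNodes) == want, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDaemonPods(cs, "daemonsets-3734"); err != nil {
		panic(err)
	}
}
------------------------------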
Apr 14 16:58:09.344: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:09.344: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:09.345: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:09.347: INFO: Number of nodes with available pods: 0
Apr 14 16:58:09.347: INFO: Node node1 is running more than one daemon pod
Apr 14 16:58:10.352: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:10.352: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:10.352: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:10.355: INFO: Number of nodes with available pods: 0
Apr 14 16:58:10.355: INFO: Node node1 is running more than one daemon pod
Apr 14 16:58:11.353: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:11.353: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:11.353: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:11.356: INFO: Number of nodes with available pods: 0
Apr 14 16:58:11.356: INFO: Node node1 is running more than one daemon pod
Apr 14 16:58:12.352: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:12.352: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:12.352: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:12.355: INFO: Number of nodes with available pods: 0
Apr 14 16:58:12.355: INFO: Node node1 is running more than one daemon pod
Apr 14 16:58:13.352: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:13.352: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:13.352: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:13.355: INFO: Number of nodes with available pods: 0
Apr 14 16:58:13.355: INFO: Node node1 is running more than one daemon pod
Apr 14 16:58:14.351: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.351: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.351: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.356: INFO: Number of nodes with available pods: 2
Apr 14 16:58:14.356: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 14 16:58:14.369: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.369: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.369: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:14.371: INFO: Number of nodes with available pods: 1
Apr 14 16:58:14.371: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:15.380: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:15.380: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:15.380: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:15.383: INFO: Number of nodes with available pods: 1
Apr 14 16:58:15.383: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:16.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:16.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:16.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:16.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:16.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:17.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:17.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:17.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:17.380: INFO: Number of nodes with available pods: 1
Apr 14 16:58:17.380: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:18.379: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:18.379: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:18.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:18.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:18.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:19.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:19.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:19.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:19.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:19.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:20.377: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:20.377: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:20.377: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:20.380: INFO: Number of nodes with available pods: 1
Apr 14 16:58:20.380: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:21.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:21.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:21.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:21.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:21.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:22.377: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:22.377: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:22.377: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:22.380: INFO: Number of nodes with available pods: 1
Apr 14 16:58:22.380: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:23.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:23.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:23.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:23.382: INFO: Number of nodes with available pods: 1
Apr 14 16:58:23.382: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:24.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:24.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:24.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:24.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:24.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:25.376: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:25.376: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:25.376: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:25.379: INFO: Number of nodes with available pods: 1
Apr 14 16:58:25.379: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:26.379: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:26.379: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:26.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:26.383: INFO: Number of nodes with available pods: 1
Apr 14 16:58:26.383: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:27.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:27.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:27.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:27.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:27.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:28.380: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:28.380: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:28.380: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:28.383: INFO: Number of nodes with available pods: 1
Apr 14 16:58:28.383: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:29.379: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:29.379: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:29.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:29.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:29.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:30.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:30.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:30.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:30.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:30.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:31.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:31.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:31.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:31.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:31.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:32.378: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:32.378: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:32.378: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:32.381: INFO: Number of nodes with available pods: 1
Apr 14 16:58:32.381: INFO: Node node2 is running more than one daemon pod
Apr 14 16:58:33.379: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:33.379: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:33.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule
Apr 14 16:58:33.383: INFO: Number of nodes with available pods: 2
Apr 14 16:58:33.383: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3734, will wait for the garbage collector to delete the pods
Apr 14 16:58:33.444: INFO: Deleting DaemonSet.extensions daemon-set took: 6.513257ms
Apr 14 16:58:34.044: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.517628ms
Apr 14 16:58:39.248: INFO: Number of nodes with available pods: 0
Apr 14 16:58:39.248: INFO: Number of running nodes: 0, number of available pods: 0
Apr 14 16:58:39.254: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3734/daemonsets","resourceVersion":"42285"},"items":null}
Apr 14 16:58:39.257: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3734/pods","resourceVersion":"42285"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:58:39.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3734" for this suite.
• [SLOW TEST:30.076 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":14,"completed":5,"skipped":2024,"failed":0}
[... Ginkgo skipped-spec 'S' markers elided ...]
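The repeated "can't tolerate node ... with taints" lines above are expected behavior, not noise: the three master nodes carry the node-role.kubernetes.io/master:NoSchedule taint, the test DaemonSet's pod template declares no matching toleration, and the controller therefore only targets node1 and node2. The sketch below shows the shape of such a DaemonSet; the names, namespace, and image are illustrative stand-ins, not the exact manifest the e2e framework generates.

```yaml
# Illustrative DaemonSet without a master toleration (names and image assumed).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-3734
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
      # With this toleration uncommented, the pods could also schedule onto
      # the tainted master nodes and the "skip checking this node" lines
      # would disappear:
      # tolerations:
      # - key: node-role.kubernetes.io/master
      #   operator: Exists
      #   effect: NoSchedule
```

Stopping a daemon pod, as the test does, simply deletes it; the DaemonSet controller notices the missing pod on that node and revives it, which is the transition from 1 available pod back to 2 visible in the poll above.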
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:58:39.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-3197
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 14 16:58:39.414: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 14 16:58:39.421: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:39.421: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:39.421: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:39.423: INFO: Number of nodes with available pods: 0
Apr 14 16:58:39.423: INFO: Node node1 is running more than one daemon pod
[... the same poll repeats at 16:58:40.431, 16:58:41.429, and 16:58:42.430, still with 0 available pods ...]
Apr 14 16:58:43.430: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:43.430: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:43.430: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:43.433: INFO: Number of nodes with available pods: 2
Apr 14 16:58:43.433: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 14 16:58:43.455: INFO: Wrong image for pod: daemon-set-dd88v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 14 16:58:43.455: INFO: Wrong image for pod: daemon-set-ksv5c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 14 16:58:43.459: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:43.459: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:43.459: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[... the poll repeats once per second through 16:58:58, each iteration also logging the three master-taint skip lines. From 16:58:46.465 pod daemon-set-dd88v is additionally reported "not available"; by 16:58:49.464 it has been replaced by daemon-set-v84lh, which is "not available" through 16:58:51.465 until it becomes ready; from 16:58:54.465 daemon-set-ksv5c is reported "not available" while it is replaced in turn ...]
Apr 14 16:58:58.466: INFO: Wrong image for pod: daemon-set-ksv5c. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Apr 14 16:58:58.466: INFO: Pod daemon-set-ksv5c is not available
Apr 14 16:58:58.469: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:58.469: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:58.469: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 16:58:59.466: INFO: Pod daemon-set-fn4ml is not available
[... three master-taint skip lines at 16:58:59.471 ...]
STEP: Check that daemon pods are still running on every node of the cluster.
[... three master-taint skip lines at 16:58:59.475 ...]
Apr 14 16:58:59.478: INFO: Number of nodes with available pods: 1
Apr 14 16:58:59.478: INFO: Node node2 is running more than one daemon pod
[... the same poll repeats at 16:59:00.489 and 16:59:01.490 ...]
Apr 14 16:59:02.487: INFO: Number of nodes with available pods: 2
Apr 14 16:59:02.487: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3197, will wait for the garbage collector to delete the pods
Apr 14 16:59:02.562: INFO: Deleting DaemonSet.extensions daemon-set took: 5.941702ms
Apr 14 16:59:03.163: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.542063ms
Apr 14 16:59:09.266: INFO: Number of nodes with available pods: 0
Apr 14 16:59:09.266: INFO: Number of running nodes: 0, number of available pods: 0
Apr 14 16:59:09.268: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3197/daemonsets","resourceVersion":"42519"},"items":null}
Apr 14 16:59:09.270: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3197/pods","resourceVersion":"42519"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:59:09.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3197" for this suite.
• [SLOW TEST:30.011 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":14,"completed":6,"skipped":2107,"failed":0}
SSSSSSSSSSSSSSSSSSSS
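This spec created the DaemonSet with docker.io/library/httpd:2.4.38-alpine and then switched the pod template to us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 (both images appear in the log). Below is a sketch of the post-update object, assuming the default RollingUpdate parameters; with maxUnavailable: 1 the controller replaces one pod at a time, which is why the poll shows a single "not available" pod while the other node still reports the old image.

```yaml
# Sketch of the updated DaemonSet (images from the log, other fields assumed).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-3197
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # default: at most one node's pod is down at a time
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        # previously docker.io/library/httpd:2.4.38-alpine
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
```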
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:59:09.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8184
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 14 16:59:09.429: INFO: Create a RollingUpdate DaemonSet
Apr 14 16:59:09.433: INFO: Check that daemon pods launch on every node of the cluster
Apr 14 16:59:09.439: INFO: Number of nodes with available pods: 0
Apr 14 16:59:09.439: INFO: Node node1 is running more than one daemon pod
[... the poll repeats at 16:59:10.447, 16:59:11.448, and 16:59:12.448, still with 0 available pods; each iteration is preceded by the three master-taint skip lines ...]
Apr 14 16:59:13.448: INFO: Number of nodes with available pods: 2
Apr 14 16:59:13.448: INFO: Number of running nodes: 2, number of available pods: 2
Apr 14 16:59:13.448: INFO: Update the DaemonSet to trigger a rollout
Apr 14 16:59:13.456: INFO: Updating DaemonSet daemon-set
Apr 14 16:59:18.471: INFO: Roll back the DaemonSet before rollout is complete
Apr 14 16:59:18.478: INFO: Updating DaemonSet daemon-set
Apr 14 16:59:18.478: INFO: Make sure DaemonSet rollback is complete
Apr 14 16:59:18.480: INFO: Wrong image for pod: daemon-set-7t9fr. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 14 16:59:18.480: INFO: Pod daemon-set-7t9fr is not available
[... the wrong-image report for daemon-set-7t9fr repeats at 16:59:19.489; each check also logs the three master-taint skip lines ...]
Apr 14 16:59:20.490: INFO: Pod daemon-set-zs62w is not available
[... three master-taint skip lines at 16:59:20.494 ...]
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8184, will wait for the garbage collector to delete the pods
Apr 14 16:59:20.558: INFO: Deleting DaemonSet.extensions daemon-set took: 4.89927ms
Apr 14 16:59:20.658: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.479886ms
Apr 14 16:59:29.262: INFO: Number of nodes with available pods: 0
Apr 14 16:59:29.262: INFO: Number of running nodes: 0, number of available pods: 0
Apr 14 16:59:29.265: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8184/daemonsets","resourceVersion":"42713"},"items":null}
Apr 14 16:59:29.267: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8184/pods","resourceVersion":"42713"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:59:29.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8184" for this suite.
• [SLOW TEST:19.997 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":14,"completed":7,"skipped":2127,"failed":0}
[... Ginkgo skipped-spec 'S' markers elided ...]
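The rollback spec works by updating the template to an image that can never be pulled and then undoing the update before the rollout finishes. Since only one pod (daemon-set-7t9fr) had been recreated with the bad image, the rollback only has to replace that pod (its successor daemon-set-zs62w becomes available afterwards), leaving the pod still running the old image untouched, hence "without unnecessary restarts". Only the relevant fragment of the DaemonSet spec is sketched here.

```yaml
# Fragment of the breaking update the test applies (image value from the log;
# field nesting shown for orientation, container name assumed). Reverting it
# is what `kubectl rollout undo daemonset/daemon-set` would do by hand.
spec:
  template:
    spec:
      containers:
      - name: app
        image: foo:non-existent   # deliberately unpullable
```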
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:59:29.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-7452
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 14 16:59:29.414: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 14 16:59:29.429: INFO: Waiting for terminating namespaces to be deleted...
Apr 14 16:59:29.432: INFO: Logging pods the kubelet thinks is on node node1 before test
Apr 14 16:59:29.450: INFO: prometheus-k8s-0 from monitoring started at 2021-04-14 15:33:18 +0000 UTC (5 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container grafana ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container prometheus ready: true, restart count 1
Apr 14 16:59:29.451: INFO: Container prometheus-config-reloader ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container rules-configmap-reloader ready: true, restart count 0
Apr 14 16:59:29.451: INFO: cmk-d5wr4 from kube-system started at 2021-04-14 15:32:05 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:59:29.451: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:59:29.451: INFO: node-feature-discovery-worker-ps9wk from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:59:29.451: INFO: node-exporter-zzqpq from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:59:29.451: INFO: collectd-sc5nx from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container collectd ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:59:29.451: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:59:29.451: INFO: kube-multus-ds-amd64-jdgxh from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kube-multus ready: true, restart count 1
Apr 14 16:59:29.451: INFO: cmk-init-discover-node1-ppgf5 from kube-system started at 2021-04-14 15:31:02 +0000 UTC (3 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container discover ready: false, restart count 0
Apr 14 16:59:29.451: INFO: Container init ready: false, restart count 0
Apr 14 16:59:29.451: INFO: Container install ready: false, restart count 0
Apr 14 16:59:29.451: INFO: kubernetes-dashboard-57777fbdcb-5tc7z from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kubernetes-dashboard ready: true, restart count 2
Apr 14 16:59:29.451: INFO: nginx-proxy-node1 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container nginx-proxy ready: true, restart count 1
Apr 14 16:59:29.451: INFO: kube-flannel-94jrd from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kube-flannel ready: true, restart count 1
Apr 14 16:59:29.451: INFO: cmk-init-discover-node2-lqbjq from kube-system started at 2021-04-14 15:31:22 +0000 UTC (3 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container discover ready: false, restart count 0
Apr 14 16:59:29.451: INFO: Container init ready: false, restart count 0
Apr 14 16:59:29.451: INFO: Container install ready: false, restart count 0
Apr 14 16:59:29.451: INFO: cmk-webhook-888945845-9ctsr from kube-system started at 2021-04-14 15:32:06 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container cmk-webhook ready: true, restart count 0
Apr 14 16:59:29.451: INFO: kube-proxy-6kqs6 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.451: INFO: Container kube-proxy ready: true, restart count 1
Apr 14 16:59:29.451: INFO: Logging pods the kubelet thinks is on node node2 before test
Apr 14 16:59:29.468: INFO: kube-proxy-mr5c7 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-proxy ready: true, restart count 2
Apr 14 16:59:29.468: INFO: node-feature-discovery-worker-jx2kp from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 16:59:29.468: INFO: kube-multus-ds-amd64-2ptgq from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-multus ready: true, restart count 2
Apr 14 16:59:29.468: INFO: node-exporter-pdn2v from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container node-exporter ready: true, restart count 0
Apr 14 16:59:29.468: INFO: collectd-l2bgc from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container collectd ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 16:59:29.468: INFO: prometheus-operator-f66f5fb4d-w6k89 from monitoring started at 2021-04-14 15:32:53 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container prometheus-operator ready: true, restart count 0
Apr 14 16:59:29.468: INFO: nginx-proxy-node2 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container nginx-proxy ready: true, restart count 2
Apr 14 16:59:29.468: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 16:59:29.468: INFO: cmk-init-discover-node2-tqmv6 from kube-system started at 2021-04-14 15:31:42 +0000 UTC (3 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container discover ready: false, restart count 0
Apr 14 16:59:29.468: INFO: Container init ready: false, restart count 0
Apr 14 16:59:29.468: INFO: Container install ready: false, restart count 0
Apr 14 16:59:29.468: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 from monitoring started at 2021-04-14 15:35:55 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container tas-controller ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container tas-extender ready: true, restart count 0
Apr 14 16:59:29.468: INFO: cmk-5gbnz from kube-system started at 2021-04-14 15:32:06 +0000 UTC (2 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container nodereport ready: true, restart count 0
Apr 14 16:59:29.468: INFO: Container reconcile ready: true, restart count 0
Apr 14 16:59:29.468: INFO: kube-flannel-5mrxg from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kube-flannel ready: true, restart count 3
Apr 14 16:59:29.468: INFO: kubernetes-metrics-scraper-54fbb4d595-l4rpk from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 16:59:29.468: INFO: Container kubernetes-metrics-scraper ready: true, restart count 3
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c731d0ed-7ba9-4cdc-9898-6ffcb3ee3b2e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c731d0ed-7ba9-4cdc-9898-6ffcb3ee3b2e off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c731d0ed-7ba9-4cdc-9898-6ffcb3ee3b2e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:59:37.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7452" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.265 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":14,"completed":8,"skipped":2459,"failed":0}
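The NodeSelector spec labels whichever node the unlabeled probe pod landed on (node1 here) with a random key and the value 42, then relaunches a pod that selects on exactly that label. A sketch of that second pod follows, with the label key and value taken from the log and the pod name and image being assumptions:

```yaml
# Pod that can only be scheduled onto the freshly labeled node1.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels          # name assumed
  namespace: sched-pred-7452
spec:
  nodeSelector:
    kubernetes.io/e2e-c731d0ed-7ba9-4cdc-9898-6ffcb3ee3b2e: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2   # image assumed
```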
[... Ginkgo skipped-spec 'S' markers elided ...]
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:59:37.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-1586
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-6586
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-3393
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 16:59:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1586" for this suite.
STEP: Destroying namespace "nsdeletetest-6586" for this suite.
Apr 14 16:59:52.978: INFO: Namespace nsdeletetest-6586 was already deleted
STEP: Destroying namespace "nsdeletetest-3393" for this suite.
• [SLOW TEST:15.407 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":14,"completed":9,"skipped":3706,"failed":0}
[... Ginkgo skipped-spec 'S' markers elided ...]
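Namespace deletion cascades: once the Namespace object is gone, every pod inside it is garbage-collected, which is what this spec asserts. A minimal hand-run equivalent is sketched below; all names and the image are illustrative.

```yaml
# Create both objects, then delete the namespace and watch the pod go away:
#   kubectl apply -f nsdeletetest.yaml
#   kubectl delete namespace nsdeletetest
#   kubectl get pods -n nsdeletetest   # eventually reports nothing / NotFound
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
```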
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":14,"completed":10,"skipped":3963,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Apr 14 16:59:53.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in namespaces-4389 STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-7683 STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in nsdeletetest-9333 STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Apr 14 16:59:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4389" for this suite. STEP: Destroying namespace "nsdeletetest-7683" for this suite. Apr 14 16:59:59.634: INFO: Namespace nsdeletetest-7683 was already deleted STEP: Destroying namespace "nsdeletetest-9333" for this suite. 
• [SLOW TEST:6.392 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":14,"completed":11,"skipped":3985,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 16:59:59.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-4109
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Apr 14 16:59:59.781: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 14 16:59:59.787: INFO: Number of nodes with available pods: 0
Apr 14 16:59:59.787: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
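
The blue/green relabeling that drives the next stretch of output works because the DaemonSet's pod template carries a nodeSelector: the controller only places daemon pods on nodes whose labels match, so flipping a node's label schedules or evicts the pod. A sketch of the relabeling half; the "color" key is illustrative (the real test generates its own label key per run):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setNodeColor patches one label on a node; a DaemonSet whose template
// has nodeSelector {"color": "blue"} gains or loses a daemon pod there
// as the value flips between blue and green.
func setNodeColor(ctx context.Context, cs kubernetes.Interface, node, color string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"color":%q}}}`, color))
	_, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
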
Apr 14 16:59:59.801: INFO: Number of nodes with available pods: 0
Apr 14 16:59:59.801: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:00.804: INFO: Number of nodes with available pods: 0
Apr 14 17:00:00.804: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:01.805: INFO: Number of nodes with available pods: 0
Apr 14 17:00:01.805: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:02.805: INFO: Number of nodes with available pods: 0
Apr 14 17:00:02.805: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:03.805: INFO: Number of nodes with available pods: 1
Apr 14 17:00:03.805: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 14 17:00:03.819: INFO: Number of nodes with available pods: 1
Apr 14 17:00:03.819: INFO: Number of running nodes: 0, number of available pods: 1
Apr 14 17:00:04.824: INFO: Number of nodes with available pods: 0
Apr 14 17:00:04.824: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 14 17:00:04.831: INFO: Number of nodes with available pods: 0
Apr 14 17:00:04.831: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:05.835: INFO: Number of nodes with available pods: 0
Apr 14 17:00:05.835: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:06.834: INFO: Number of nodes with available pods: 0
Apr 14 17:00:06.834: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:07.835: INFO: Number of nodes with available pods: 0
Apr 14 17:00:07.835: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:08.839: INFO: Number of nodes with available pods: 0
Apr 14 17:00:08.839: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:09.835: INFO: Number of nodes with available pods: 0
Apr 14 17:00:09.835: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:10.834: INFO: Number of nodes with available pods: 0
Apr 14 17:00:10.834: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:11.835: INFO: Number of nodes with available pods: 0
Apr 14 17:00:11.835: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:12.838: INFO: Number of nodes with available pods: 0
Apr 14 17:00:12.839: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:13.834: INFO: Number of nodes with available pods: 1
Apr 14 17:00:13.834: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4109, will wait for the garbage collector to delete the pods
Apr 14 17:00:13.898: INFO: Deleting DaemonSet.extensions daemon-set took: 5.552613ms
Apr 14 17:00:14.498: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.279528ms
Apr 14 17:00:19.201: INFO: Number of nodes with available pods: 0
Apr 14 17:00:19.201: INFO: Number of running nodes: 0, number of available pods: 0
Apr 14 17:00:19.204: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4109/daemonsets","resourceVersion":"43168"},"items":null}
Apr 14 17:00:19.207: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4109/pods","resourceVersion":"43168"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 17:00:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4109" for this suite.
• [SLOW TEST:19.589 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":14,"completed":12,"skipped":4279,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 17:00:19.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in daemonsets-8767
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 14 17:00:19.380: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:19.380: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:19.380: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:19.382: INFO: Number of nodes with available pods: 0
Apr 14 17:00:19.382: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:20.389: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:20.389: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:20.389: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:20.391: INFO: Number of nodes with available pods: 0
Apr 14 17:00:20.391: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:21.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:21.388: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:21.388: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:21.391: INFO: Number of nodes with available pods: 0
Apr 14 17:00:21.391: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:22.389: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:22.389: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:22.390: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:22.393: INFO: Number of nodes with available pods: 0
Apr 14 17:00:22.393: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:23.388: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:23.388: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:23.388: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:23.390: INFO: Number of nodes with available pods: 1
Apr 14 17:00:23.390: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:24.389: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.389: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.389: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.391: INFO: Number of nodes with available pods: 2
Apr 14 17:00:24.391: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 14 17:00:24.405: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.405: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.405: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:24.408: INFO: Number of nodes with available pods: 1
Apr 14 17:00:24.408: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:25.413: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:25.413: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:25.413: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:25.417: INFO: Number of nodes with available pods: 1
Apr 14 17:00:25.417: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:26.416: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:26.416: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:26.416: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:26.419: INFO: Number of nodes with available pods: 1
Apr 14 17:00:26.419: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:27.414: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:27.414: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:27.414: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:27.417: INFO: Number of nodes with available pods: 1
Apr 14 17:00:27.417: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:28.414: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:28.414: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:28.414: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:28.417: INFO: Number of nodes with available pods: 1
Apr 14 17:00:28.417: INFO: Node node1 is running more than one daemon pod
Apr 14 17:00:29.413: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:29.413: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:29.413: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 14 17:00:29.416: INFO: Number of nodes with available pods: 2
Apr 14 17:00:29.416: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8767, will wait for the garbage collector to delete the pods
Apr 14 17:00:29.478: INFO: Deleting DaemonSet.extensions daemon-set took: 4.875276ms
Apr 14 17:00:29.578: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.23692ms
Apr 14 17:00:39.182: INFO: Number of nodes with available pods: 0
Apr 14 17:00:39.182: INFO: Number of running nodes: 0, number of available pods: 0
Apr 14 17:00:39.184: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8767/daemonsets","resourceVersion":"43326"},"items":null}
Apr 14 17:00:39.185: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8767/pods","resourceVersion":"43326"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 17:00:39.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8767" for this suite.
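
The repeated "can't tolerate node masterN" lines above are the framework excluding the three control-plane nodes from its expected-node count: they carry the node-role.kubernetes.io/master:NoSchedule taint and the test DaemonSet declares no matching toleration, so only node1 and node2 are expected to run daemon pods. For contrast, a sketch of the toleration a pod template would need for the scheduler to place it on those tainted masters (not something this test adds):

package e2esketch

import corev1 "k8s.io/api/core/v1"

// masterToleration tolerates the master NoSchedule taint regardless of
// its value; without it, daemon pods are confined to the worker nodes.
var masterToleration = corev1.Toleration{
	Key:      "node-role.kubernetes.io/master",
	Operator: corev1.TolerationOpExists,
	Effect:   corev1.TaintEffectNoSchedule,
}
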
• [SLOW TEST:19.972 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":14,"completed":13,"skipped":4316,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Apr 14 17:00:39.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-pred-1343
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 14 17:00:39.338: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 14 17:00:39.351: INFO: Waiting for terminating namespaces to be deleted...
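
After logging the pods on each schedulable node (below), this predicate test submits a pod whose nodeSelector matches no node in the cluster and then watches for the scheduler's FailedScheduling events. A sketch of the kind of pod spec involved; the pod name matches the events in the log, while the image and selector key/value here are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod carries a nodeSelector no node satisfies, so it stays
// Pending and the scheduler emits FailedScheduling events like the
// "0/5 nodes are available: 5 node(s) didn't match node selector."
// messages recorded below.
var restrictedPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{
			{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
		},
		NodeSelector: map[string]string{"label": "nonempty"}, // no node has this label
	},
}
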
Apr 14 17:00:39.354: INFO: Logging pods the kubelet thinks is on node node1 before test
Apr 14 17:00:39.368: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mlc4d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 17:00:39.368: INFO: node-feature-discovery-worker-ps9wk from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 17:00:39.368: INFO: node-exporter-zzqpq from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container node-exporter ready: true, restart count 0
Apr 14 17:00:39.368: INFO: collectd-sc5nx from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container collectd ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 17:00:39.368: INFO: kube-multus-ds-amd64-jdgxh from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kube-multus ready: true, restart count 1
Apr 14 17:00:39.368: INFO: cmk-init-discover-node1-ppgf5 from kube-system started at 2021-04-14 15:31:02 +0000 UTC (3 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container discover ready: false, restart count 0
Apr 14 17:00:39.368: INFO: Container init ready: false, restart count 0
Apr 14 17:00:39.368: INFO: Container install ready: false, restart count 0
Apr 14 17:00:39.368: INFO: kubernetes-dashboard-57777fbdcb-5tc7z from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kubernetes-dashboard ready: true, restart count 2
Apr 14 17:00:39.368: INFO: nginx-proxy-node1 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container nginx-proxy ready: true, restart count 1
Apr 14 17:00:39.368: INFO: kube-flannel-94jrd from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kube-flannel ready: true, restart count 1
Apr 14 17:00:39.368: INFO: cmk-init-discover-node2-lqbjq from kube-system started at 2021-04-14 15:31:22 +0000 UTC (3 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container discover ready: false, restart count 0
Apr 14 17:00:39.368: INFO: Container init ready: false, restart count 0
Apr 14 17:00:39.368: INFO: Container install ready: false, restart count 0
Apr 14 17:00:39.368: INFO: cmk-webhook-888945845-9ctsr from kube-system started at 2021-04-14 15:32:06 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container cmk-webhook ready: true, restart count 0
Apr 14 17:00:39.368: INFO: kube-proxy-6kqs6 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container kube-proxy ready: true, restart count 1
Apr 14 17:00:39.368: INFO: prometheus-k8s-0 from monitoring started at 2021-04-14 15:33:18 +0000 UTC (5 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container grafana ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container prometheus ready: true, restart count 1
Apr 14 17:00:39.368: INFO: Container prometheus-config-reloader ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container rules-configmap-reloader ready: true, restart count 0
Apr 14 17:00:39.368: INFO: cmk-d5wr4 from kube-system started at 2021-04-14 15:32:05 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.368: INFO: Container nodereport ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Container reconcile ready: true, restart count 0
Apr 14 17:00:39.368: INFO: Logging pods the kubelet thinks is on node node2 before test
Apr 14 17:00:39.380: INFO: cmk-init-discover-node2-tqmv6 from kube-system started at 2021-04-14 15:31:42 +0000 UTC (3 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container discover ready: false, restart count 0
Apr 14 17:00:39.380: INFO: Container init ready: false, restart count 0
Apr 14 17:00:39.380: INFO: Container install ready: false, restart count 0
Apr 14 17:00:39.380: INFO: nginx-proxy-node2 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container nginx-proxy ready: true, restart count 2
Apr 14 17:00:39.380: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-57s5d from kube-system started at 2021-04-14 15:29:23 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 14 17:00:39.380: INFO: tas-telemetry-aware-scheduling-5ffb6fd745-89pr4 from monitoring started at 2021-04-14 15:35:55 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container tas-controller ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container tas-extender ready: true, restart count 0
Apr 14 17:00:39.380: INFO: cmk-5gbnz from kube-system started at 2021-04-14 15:32:06 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container nodereport ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container reconcile ready: true, restart count 0
Apr 14 17:00:39.380: INFO: kube-flannel-5mrxg from kube-system started at 2021-04-14 15:22:40 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-flannel ready: true, restart count 3
Apr 14 17:00:39.380: INFO: kubernetes-metrics-scraper-54fbb4d595-l4rpk from kube-system started at 2021-04-14 15:23:17 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kubernetes-metrics-scraper ready: true, restart count 3
Apr 14 17:00:39.380: INFO: kube-proxy-mr5c7 from kube-system started at 2021-04-14 15:22:02 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-proxy ready: true, restart count 2
Apr 14 17:00:39.380: INFO: node-feature-discovery-worker-jx2kp from kube-system started at 2021-04-14 15:28:21 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container nfd-worker ready: true, restart count 0
Apr 14 17:00:39.380: INFO: collectd-l2bgc from monitoring started at 2021-04-14 15:36:31 +0000 UTC (3 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container collectd ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container collectd-exporter ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container rbac-proxy ready: true, restart count 0
Apr 14 17:00:39.380: INFO: kube-multus-ds-amd64-2ptgq from kube-system started at 2021-04-14 15:22:51 +0000 UTC (1 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-multus ready: true, restart count 2
Apr 14 17:00:39.380: INFO: node-exporter-pdn2v from monitoring started at 2021-04-14 15:33:00 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container node-exporter ready: true, restart count 0
Apr 14 17:00:39.380: INFO: prometheus-operator-f66f5fb4d-w6k89 from monitoring started at 2021-04-14 15:32:53 +0000 UTC (2 container statuses recorded)
Apr 14 17:00:39.380: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 14 17:00:39.380: INFO: Container prometheus-operator ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1675c81658b5e75e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1675c81658fa24ad], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Apr 14 17:00:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1343" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":14,"completed":14,"skipped":4663,"failed":0}
Apr 14 17:00:40.433: INFO: Running AfterSuite actions on all nodes
Apr 14 17:00:40.433: INFO: Running AfterSuite actions on node 1
Apr 14 17:00:40.433: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":14,"completed":14,"skipped":4980,"failed":0}
Ran 14 of 4994 Specs in 582.500 seconds
SUCCESS! -- 14 Passed | 0 Failed | 0 Pending | 4980 Skipped
PASS
Ginkgo ran 1 suite in 9m43.606917923s
Test Suite Passed
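
The {"msg":...} lines threaded through the log are machine-readable progress records emitted alongside the human-readable output; the final "Test Suite completed" record carries the same totals as the "Ran 14 of 4994 Specs" summary. A sketch of decoding them, e.g. to rebuild the tally from a saved log; the type and function names are illustrative:

package e2esketch

import "encoding/json"

// progressRecord mirrors the fields seen in the log's JSON lines, e.g.
// {"msg":"Test Suite completed","total":14,"completed":14,"skipped":4980,"failed":0}.
type progressRecord struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseProgress decodes one JSON progress line from the log.
func parseProgress(line []byte) (progressRecord, error) {
	var r progressRecord
	err := json.Unmarshal(line, &r)
	return r, err
}
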