I1122 01:02:40.550242 20 e2e.go:129] Starting e2e run "541f2054-deb2-4bda-a09c-331ea2b12a71" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1637542959 - Will randomize all specs
Will run 17 of 5770 specs

Nov 22 01:02:40.613: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 01:02:40.618: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 22 01:02:40.647: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 22 01:02:40.708: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting
Nov 22 01:02:40.708: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting
Nov 22 01:02:40.708: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 22 01:02:40.708: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 22 01:02:40.708: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 22 01:02:40.725: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 22 01:02:40.725: INFO: e2e test version: v1.21.5
Nov 22 01:02:40.726: INFO: kube-apiserver version: v1.21.1
Nov 22 01:02:40.726: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 01:02:40.733: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:02:40.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W1122 01:02:40.762446 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 22 01:02:40.762: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 22 01:02:40.766: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 22 01:02:40.768: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 22 01:02:40.777: INFO: Waiting for terminating namespaces to be deleted...
Nov 22 01:02:40.780: INFO: Logging pods the apiserver thinks is on node node1 before test
Nov 22 01:02:40.788: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container nodereport ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container reconcile ready: true, restart count 0
Nov 22 01:02:40.789: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container discover ready: false, restart count 0
Nov 22 01:02:40.789: INFO: 	Container init ready: false, restart count 0
Nov 22 01:02:40.789: INFO: 	Container install ready: false, restart count 0
Nov 22 01:02:40.789: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 22 01:02:40.789: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kube-multus ready: true, restart count 1
Nov 22 01:02:40.789: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 22 01:02:40.789: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
Nov 22 01:02:40.789: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 22 01:02:40.789: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 22 01:02:40.789: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 22 01:02:40.789: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container collectd ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 22 01:02:40.789: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container node-exporter ready: true, restart count 0
Nov 22 01:02:40.789: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded)
Nov 22 01:02:40.789: INFO: 	Container config-reloader ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container grafana ready: true, restart count 0
Nov 22 01:02:40.789: INFO: 	Container prometheus ready: true, restart count 1
Nov 22 01:02:40.789: INFO: Logging pods the apiserver thinks is on node node2 before test
Nov 22 01:02:40.799: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container discover ready: false, restart count 0
Nov 22 01:02:40.799: INFO: 	Container init ready: false, restart count 0
Nov 22 01:02:40.799: INFO: 	Container install ready: false, restart count 0
Nov 22 01:02:40.799: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container nodereport ready: true, restart count 0
Nov 22 01:02:40.799: INFO: 	Container reconcile ready: true, restart count 0
Nov 22 01:02:40.799: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 22 01:02:40.799: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 22 01:02:40.799: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kube-multus ready: true, restart count 1
Nov 22 01:02:40.799: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 22 01:02:40.799: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 22 01:02:40.799: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 22 01:02:40.799: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 22 01:02:40.799: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 22 01:02:40.799: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container collectd ready: true, restart count 0
Nov 22 01:02:40.799: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 22 01:02:40.799: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 22 01:02:40.799: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 22 01:02:40.799: INFO: 	Container node-exporter ready: true, restart count 0
Nov 22 01:02:40.799: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded)
Nov 22 01:02:40.799: INFO: 	Container tas-extender ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
Nov 22 01:02:40.853: INFO: Pod cmk-7wvgm requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod cmk-prx26 requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod cmk-webhook-6c9d5f8578-8fxd8 requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod kube-flannel-cfzcv requesting resource cpu=150m on Node node1
Nov 22 01:02:40.853: INFO: Pod kube-flannel-rdjt7 requesting resource cpu=150m on Node node2
Nov 22 01:02:40.853: INFO: Pod kube-multus-ds-amd64-6bg2m requesting resource cpu=100m on Node node2
Nov 22 01:02:40.853: INFO: Pod kube-multus-ds-amd64-wcr4n requesting resource cpu=100m on Node node1
Nov 22 01:02:40.853: INFO: Pod kube-proxy-5xb56 requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod kube-proxy-mb5cq requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod kubernetes-dashboard-785dcbb76d-wrkrj requesting resource cpu=50m on Node node2
Nov 22 01:02:40.853: INFO: Pod kubernetes-metrics-scraper-5558854cb-kzhf7 requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Nov 22 01:02:40.853: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Nov 22 01:02:40.853: INFO: Pod node-feature-discovery-worker-lkpb8 requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod node-feature-discovery-worker-slrp4 requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod collectd-6t47m requesting resource cpu=0m on Node node2
Nov 22 01:02:40.853: INFO: Pod collectd-zmh78 requesting resource cpu=0m on Node node1
Nov 22 01:02:40.853: INFO: Pod node-exporter-jj5rx requesting resource cpu=112m on Node node1
Nov 22 01:02:40.853: INFO: Pod node-exporter-r2vkb requesting resource cpu=112m on Node node2
Nov 22 01:02:40.853: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1
Nov 22 01:02:40.853: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-q64pf requesting resource cpu=0m on Node node2
STEP: Starting Pods to consume most of the cluster CPU.
Nov 22 01:02:40.853: INFO: Creating a pod which consumes cpu=53489m on Node node1
Nov 22 01:02:40.865: INFO: Creating a pod which consumes cpu=53594m on Node node2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd.16b9b8a596f770f8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1683/filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd.16b9b8a5fc557947], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd.16b9b8a60ff5cfc1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 329.267824ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd.16b9b8a617c39f6c], Reason = [Created], Message = [Created container filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd.16b9b8a61f87707a], Reason = [Started], Message = [Started container filler-pod-a58390fc-1019-4c1a-bb3a-a3d1da0ce5bd]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6.16b9b8a59784fa26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1683/filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6 to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6.16b9b8a5f14c799e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6.16b9b8a6059cd574], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 340.804851ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6.16b9b8a60c05e5be], Reason = [Created], Message = [Created container filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6.16b9b8a612ec4543], Reason = [Started], Message = [Started container filler-pod-f456e31a-0915-4db3-ae75-7eae6c6990a6]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16b9b8a68744b07b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:02:45.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1683" for this suite.
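The FailedScheduling event above is the assertion at the heart of this test: once the two filler pods have consumed almost all allocatable CPU on node1 and node2, a pod whose CPU request fits nowhere must stay unschedulable. As a rough illustration only, a minimal client-go sketch of such an oversized request follows (the pod name, namespace, and the 600-CPU figure are placeholders, not the suite's actual values):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the suite points at (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A pod requesting far more CPU than any node has left. The API accepts
	// the pod, but the scheduler cannot bind it and records a
	// FailedScheduling event like the one in the log above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Placeholder: deliberately larger than node allocatable.
						corev1.ResourceCPU: resource.MustParse("600"),
					},
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created, now pending:", created.Name)
}
```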
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:5.205 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":1,"skipped":155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:02:45.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:02:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1847" for this suite.
STEP: Destroying namespace "nspatchtest-ac01c82e-fea8-46a1-9097-02477cc9c668-2578" for this suite.
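For reference, the patching step above boils down to a strategic merge patch against the Namespace object, followed by a read-back of the label. A minimal client-go sketch (the namespace name "nspatchtest" and the label key/value are hypothetical stand-ins for the random ones the test generates):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Patch a label onto an existing namespace, then confirm it stuck.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(),
		"nspatchtest", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("label after patch:", ns.Labels["testLabel"])
}
```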
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":2,"skipped":254,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:02:46.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 22 01:02:46.046: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 22 01:03:46.098: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:03:46.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 22 01:03:46.131: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Nov 22 01:03:46.134: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:03:46.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4374" for this suite.
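The two "Forbidden" messages above are expected: the test deliberately tries to change the value of existing PriorityClasses, and the API server rejects the updates because PriorityClass.Value is immutable. A minimal client-go sketch of the same round trip (the name "p1" and the numeric values are illustrative, not the suite's):

```go
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pcs := cs.SchedulingV1().PriorityClasses()

	// Create a PriorityClass, then try to change its Value in an update.
	p1, err := pcs.Create(context.TODO(), &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      100, // illustrative value
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	p1.Value = 200 // Value is immutable, so this update must fail.
	if _, err := pcs.Update(context.TODO(), p1, metav1.UpdateOptions{}); err != nil {
		// Prints the same Forbidden error quoted in the log above.
		fmt.Println(err)
	}
}
```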
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:03:46.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8418" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:60.194 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":3,"skipped":878,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:03:46.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 22 01:03:46.254: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:46.254: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:46.254: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:46.256: INFO: Number of nodes with available pods: 0
Nov 22 01:03:46.256: INFO: Node node1 is running more than one daemon pod
Nov 22 01:03:47.262: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:47.262: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:47.262: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:47.265: INFO: Number of nodes with available pods: 0
Nov 22 01:03:47.265: INFO: Node node1 is running more than one daemon pod
Nov 22 01:03:48.263: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:48.263: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:48.263: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:48.266: INFO: Number of nodes with available pods: 0
Nov 22 01:03:48.266: INFO: Node node1 is running more than one daemon pod
Nov 22 01:03:49.262: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:49.262: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:49.262: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:49.266: INFO: Number of nodes with available pods: 1
Nov 22 01:03:49.266: INFO: Node node1 is running more than one daemon pod
Nov 22 01:03:50.263: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.263: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.263: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.266: INFO: Number of nodes with available pods: 2
Nov 22 01:03:50.266: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Nov 22 01:03:50.283: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.283: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.283: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:50.286: INFO: Number of nodes with available pods: 1
Nov 22 01:03:50.286: INFO: Node node2 is running more than one daemon pod
Nov 22 01:03:51.292: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:51.292: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:51.292: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:51.295: INFO: Number of nodes with available pods: 1
Nov 22 01:03:51.295: INFO: Node node2 is running more than one daemon pod
Nov 22 01:03:52.295: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:52.295: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:52.295: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:52.297: INFO: Number of nodes with available pods: 1
Nov 22 01:03:52.297: INFO: Node node2 is running more than one daemon pod
Nov 22 01:03:53.291: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:53.291: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:53.291: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 01:03:53.293: INFO: Number of nodes with available pods: 2
Nov 22 01:03:53.293: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
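For context, the DaemonSet this test drives is about as small as they come: a pause-container template plus a matching selector. The controller keeps one pod per schedulable node (the tainted masters are skipped, as logged above) and re-creates any daemon pod whose phase is forced to Failed. A hedged client-go sketch of such a DaemonSet (namespace and names are placeholders, not the suite's generated ones):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.4.1",
					}},
				},
			},
		},
	}
	// One daemon pod lands on each schedulable node; the controller then
	// replaces any daemon pod that ends up in phase Failed.
	if _, err := cs.AppsV1().DaemonSets("default").Create(
		context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```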
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9362, will wait for the garbage collector to delete the pods
Nov 22 01:03:53.356: INFO: Deleting DaemonSet.extensions daemon-set took: 4.844629ms
Nov 22 01:03:53.456: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.143012ms
Nov 22 01:04:03.460: INFO: Number of nodes with available pods: 0
Nov 22 01:04:03.460: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 01:04:03.466: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55240"},"items":null}
Nov 22 01:04:03.470: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55240"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:04:03.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9362" for this suite.

• [SLOW TEST:17.279 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":4,"skipped":893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:04:03.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 22 01:04:03.525: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 22 01:04:03.540: INFO: Waiting for terminating namespaces to be deleted...
Nov 22 01:04:03.542: INFO: Logging pods the apiserver thinks is on node node1 before test
Nov 22 01:04:03.549: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container nodereport ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container reconcile ready: true, restart count 0
Nov 22 01:04:03.549: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container discover ready: false, restart count 0
Nov 22 01:04:03.549: INFO: 	Container init ready: false, restart count 0
Nov 22 01:04:03.549: INFO: 	Container install ready: false, restart count 0
Nov 22 01:04:03.549: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kube-flannel ready: true, restart count 1
Nov 22 01:04:03.549: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kube-multus ready: true, restart count 1
Nov 22 01:04:03.549: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kube-proxy ready: true, restart count 1
Nov 22 01:04:03.549: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 2
Nov 22 01:04:03.549: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 22 01:04:03.549: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 22 01:04:03.549: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 22 01:04:03.549: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container collectd ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 22 01:04:03.549: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container node-exporter ready: true, restart count 0
Nov 22 01:04:03.549: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded)
Nov 22 01:04:03.549: INFO: 	Container config-reloader ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container grafana ready: true, restart count 0
Nov 22 01:04:03.549: INFO: 	Container prometheus ready: true, restart count 1
Nov 22 01:04:03.549: INFO: Logging pods the apiserver thinks is on node node2 before test
Nov 22 01:04:03.559: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container discover ready: false, restart count 0
Nov 22 01:04:03.559: INFO: 	Container init ready: false, restart count 0
Nov 22 01:04:03.559: INFO: 	Container install ready: false, restart count 0
Nov 22 01:04:03.559: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container nodereport ready: true, restart count 0
Nov 22 01:04:03.559: INFO: 	Container reconcile ready: true, restart count 0
Nov 22 01:04:03.559: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container cmk-webhook ready: true, restart count 0
Nov 22 01:04:03.559: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kube-flannel ready: true, restart count 2
Nov 22 01:04:03.559: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kube-multus ready: true, restart count 1
Nov 22 01:04:03.559: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kube-proxy ready: true, restart count 2
Nov 22 01:04:03.559: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Nov 22 01:04:03.559: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container nginx-proxy ready: true, restart count 2
Nov 22 01:04:03.559: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container nfd-worker ready: true, restart count 0
Nov 22 01:04:03.559: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kube-sriovdp ready: true, restart count 0
Nov 22 01:04:03.559: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container collectd ready: true, restart count 0
Nov 22 01:04:03.559: INFO: 	Container collectd-exporter ready: true, restart count 0
Nov 22 01:04:03.559: INFO: 	Container rbac-proxy ready: true, restart count 0
Nov 22 01:04:03.559: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Nov 22 01:04:03.559: INFO: 	Container node-exporter ready: true, restart count 0
Nov 22 01:04:03.559: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded)
Nov 22 01:04:03.559: INFO: 	Container tas-extender ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4e6adb1e-ebbf-419d-9336-5c4752da6f91 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-4e6adb1e-ebbf-419d-9336-5c4752da6f91 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4e6adb1e-ebbf-419d-9336-5c4752da6f91
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:09:11.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-302" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:308.168 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":5,"skipped":1160,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:09:11.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 22 01:09:11.692: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 22 01:10:11.745: INFO: Waiting for terminating namespaces to be deleted...
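Stepping back to the hostPort test that just passed: the conflict it exercises comes down to two pod specs differing only in hostIP. A hedged sketch of the pair, assuming a placeholder node-selector label in place of the random e2e label the test applied; pod4 binds 0.0.0.0:54322, so pod5's request for 10.10.190.208:54322 on the same node can never be satisfied and pod5 stays Pending:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Both pods target the same node via a label (placeholder here) and
	// ask for the same hostPort/protocol; only the hostIP differs.
	mk := func(name, hostIP string) *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec: corev1.PodSpec{
				NodeSelector: map[string]string{"e2e-hostport-test": "node2"},
				Containers: []corev1.Container{{
					Name:  "pause",
					Image: "k8s.gcr.io/pause:3.4.1",
					Ports: []corev1.ContainerPort{{
						ContainerPort: 8080,
						HostPort:      54322,
						HostIP:        hostIP,
						Protocol:      corev1.ProtocolTCP,
					}},
				}},
			},
		}
	}
	// pod4 schedules; pod5 conflicts on 54322/TCP and stays Pending.
	for _, p := range []*corev1.Pod{mk("pod4", "0.0.0.0"), mk("pod5", "10.10.190.208")} {
		if _, err := cs.CoreV1().Pods("default").Create(
			context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```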
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:10:11.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Nov 22 01:10:15.796: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 22 01:10:29.856: INFO: pods created so far: [1 1 1]
Nov 22 01:10:29.857: INFO: length of pods created so far: 3
Nov 22 01:10:47.870: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:10:54.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-2697" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:10:54.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1342" for this suite.
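The PreemptionExecutionPath run above creates ReplicaSets at ascending priorities on the chosen node and watches higher-priority pods displace lower ones (the "[2 2 1]" tally counts pods per ReplicaSet). The moving part is just a priorityClassName on the pod template; a hedged client-go sketch follows (the PriorityClass "p1", the CPU figure, and the names are placeholders and the class must already exist):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	replicas := int32(1)
	labels := map[string]string{"app": "rs-pod1"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-pod1"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Higher-valued classes on later ReplicaSets are what
					// drive the preemption the test observes.
					PriorityClassName: "p1",
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU: resource.MustParse("200m"),
							},
						},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(
		context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```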
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:103.280 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":6,"skipped":1262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:10:54.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Nov 22 01:10:55.259: INFO: Pod name wrapped-volume-race-82fc8983-8d09-4224-bcf2-81c6b462a88d: Found 1 pods out of 5
Nov 22 01:11:00.267: INFO: Pod name wrapped-volume-race-82fc8983-8d09-4224-bcf2-81c6b462a88d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-82fc8983-8d09-4224-bcf2-81c6b462a88d in namespace emptydir-wrapper-2569, will wait for the garbage collector to delete the pods
Nov 22 01:11:16.352: INFO: Deleting ReplicationController wrapped-volume-race-82fc8983-8d09-4224-bcf2-81c6b462a88d took: 4.695182ms
Nov 22 01:11:16.452: INFO: Terminating ReplicationController wrapped-volume-race-82fc8983-8d09-4224-bcf2-81c6b462a88d pods took: 100.401718ms
STEP: Creating RC which spawns configmap-volume pods
Nov 22 01:11:24.170: INFO: Pod name wrapped-volume-race-c849f450-3d0e-4182-8933-aec08d6e5766: Found 0 pods out of 5
Nov 22 01:11:29.180: INFO: Pod name wrapped-volume-race-c849f450-3d0e-4182-8933-aec08d6e5766: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c849f450-3d0e-4182-8933-aec08d6e5766 in namespace emptydir-wrapper-2569, will wait for the garbage collector to delete the pods
Nov 22 01:11:43.262: INFO: Deleting ReplicationController wrapped-volume-race-c849f450-3d0e-4182-8933-aec08d6e5766 took: 6.281355ms
Nov 22 01:11:43.363: INFO: Terminating ReplicationController wrapped-volume-race-c849f450-3d0e-4182-8933-aec08d6e5766 pods took: 100.972695ms
STEP: Creating RC which spawns configmap-volume pods
Nov 22 01:11:53.483: INFO: Pod name wrapped-volume-race-6e7214b6-e901-4570-a64c-bea7769fe5f7: Found 0 pods out of 5
Nov 22 01:11:58.493: INFO: Pod name wrapped-volume-race-6e7214b6-e901-4570-a64c-bea7769fe5f7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6e7214b6-e901-4570-a64c-bea7769fe5f7 in namespace emptydir-wrapper-2569, will wait for the garbage collector to delete the pods
Nov 22 01:12:14.582: INFO: Deleting ReplicationController wrapped-volume-race-6e7214b6-e901-4570-a64c-bea7769fe5f7 took: 5.195123ms
Nov 22 01:12:14.684: INFO: Terminating ReplicationController wrapped-volume-race-6e7214b6-e901-4570-a64c-bea7769fe5f7 pods took: 101.651389ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:12:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2569" for this suite.

• [SLOW TEST:88.724 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":7,"skipped":1752,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:12:23.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 22 01:12:23.722: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Nov 22 01:12:23.727: INFO: Number of nodes with available pods: 0
Nov 22 01:12:23.727: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Nov 22 01:12:23.742: INFO: Number of nodes with available pods: 0
Nov 22 01:12:23.742: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:24.746: INFO: Number of nodes with available pods: 0
Nov 22 01:12:24.746: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:25.746: INFO: Number of nodes with available pods: 0
Nov 22 01:12:25.746: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:26.746: INFO: Number of nodes with available pods: 1
Nov 22 01:12:26.746: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Nov 22 01:12:26.763: INFO: Number of nodes with available pods: 1
Nov 22 01:12:26.763: INFO: Number of running nodes: 0, number of available pods: 1
Nov 22 01:12:27.769: INFO: Number of nodes with available pods: 0
Nov 22 01:12:27.769: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Nov 22 01:12:27.781: INFO: Number of nodes with available pods: 0
Nov 22 01:12:27.781: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:28.785: INFO: Number of nodes with available pods: 0
Nov 22 01:12:28.785: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:29.785: INFO: Number of nodes with available pods: 0
Nov 22 01:12:29.785: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:30.785: INFO: Number of nodes with available pods: 0
Nov 22 01:12:30.785: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:31.788: INFO: Number of nodes with available pods: 0
Nov 22 01:12:31.788: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:32.788: INFO: Number of nodes with available pods: 0
Nov 22 01:12:32.788: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:33.785: INFO: Number of nodes with available pods: 0
Nov 22 01:12:33.785: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:34.786: INFO: Number of nodes with available pods: 0
Nov 22 01:12:34.786: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:35.786: INFO: Number of nodes with available pods: 0
Nov 22 01:12:35.786: INFO: Node node2 is running more than one daemon pod
Nov 22 01:12:36.788: INFO: Number of nodes with available pods: 1
Nov 22 01:12:36.788: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-128, will wait for the garbage collector to delete the pods
Nov 22 01:12:36.850: INFO: Deleting DaemonSet.extensions daemon-set took: 4.360784ms
Nov 22 01:12:36.951: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.72929ms
Nov 22 01:12:43.456: INFO: Number of nodes with available pods: 0
Nov 22 01:12:43.456: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 01:12:43.458: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"57750"},"items":null}
Nov 22 01:12:43.460: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"57750"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:12:43.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-128" for this suite.

• [SLOW TEST:19.807 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":8,"skipped":1882,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:12:43.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 22 01:12:43.520: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 22 01:13:43.586: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Nov 22 01:13:43.616: INFO: Created pod: pod0-sched-preemption-low-priority
Nov 22 01:13:43.636: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:13:57.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7969" for this suite.
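The critical-pod run above hinges on one of the built-in critical PriorityClasses (system-cluster-critical is used for illustration here): the critical pod requests the same resources as the low-priority victim, so the only way to place it is to evict that victim. A rough sketch of such a pod, with placeholder resource figures (the suite computes them from node allocatable):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A pod using a built-in critical PriorityClass. Its requests mirror
	// the low-priority pod's, so scheduling it forces a preemption.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("2"),   // placeholder
						corev1.ResourceMemory: resource.MustParse("4Gi"), // placeholder
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("kube-system").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```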
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:13:57.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 22 01:13:57.768: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 22 01:14:57.826: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Nov 22 01:14:57.849: INFO: Created pod: pod0-sched-preemption-low-priority
Nov 22 01:14:57.868: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:15:17.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-90" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:80.216 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":10,"skipped":2577,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
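Basic preemption differs from the critical-pod case only in using a user-defined PriorityClass for the preemptor. A sketch under the same assumptions, with an illustrative class name and value:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A PriorityClass outranking the low/medium classes used for the
	// filler pods (the value here is illustrative).
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
		Value:      1000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A preemptor pod referencing it; with the same resource requests as the
	// fillers, it can only fit if the scheduler evicts the low-priority pod.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}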
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:15:17.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 22 01:15:17.994: INFO: Create a RollingUpdate DaemonSet
Nov 22 01:15:17.998: INFO: Check that daemon pods launch on every node of the cluster
Nov 22 01:15:18.004: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[... the same check logged for master2 and master3, and the master1-master3 triplet repeated on every poll below ...]
Nov 22 01:15:18.009: INFO: Number of nodes with available pods: 0
Nov 22 01:15:18.009: INFO: Node node1 is running more than one daemon pod
Nov 22 01:15:19.016: INFO: Number of nodes with available pods: 0
Nov 22 01:15:19.016: INFO: Node node1 is running more than one daemon pod
Nov 22 01:15:20.018: INFO: Number of nodes with available pods: 0
Nov 22 01:15:20.018: INFO: Node node1 is running more than one daemon pod
Nov 22 01:15:21.017: INFO: Number of nodes with available pods: 2
Nov 22 01:15:21.018: INFO: Number of running nodes: 2, number of available pods: 2
Nov 22 01:15:21.018: INFO: Update the DaemonSet to trigger a rollout
Nov 22 01:15:21.024: INFO: Updating DaemonSet daemon-set
Nov 22 01:15:34.039: INFO: Roll back the DaemonSet before rollout is complete
Nov 22 01:15:34.046: INFO: Updating DaemonSet daemon-set
Nov 22 01:15:34.046: INFO: Make sure DaemonSet rollback is complete
Nov 22 01:15:34.050: INFO: Wrong image for pod: daemon-set-mrxv2. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
Nov 22 01:15:34.050: INFO: Pod daemon-set-mrxv2 is not available
[... master-taint skip triplets repeated on each 1s poll from 01:15:34 through 01:15:43 ...]
Nov 22 01:15:44.062: INFO: Pod daemon-set-84v98 is not available
[... one more master-taint skip triplet at 01:15:44.068 ...]
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-667, will wait for the garbage collector to delete the pods
Nov 22 01:15:44.131: INFO: Deleting DaemonSet.extensions daemon-set took: 3.996974ms
Nov 22 01:15:44.231: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.644787ms
Nov 22 01:15:53.435: INFO: Number of nodes with available pods: 0
Nov 22 01:15:53.435: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 01:15:53.437: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"58504"},"items":null}
Nov 22 01:15:53.439: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"58504"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:15:53.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-667" for this suite.
• [SLOW TEST:35.503 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":11,"skipped":2614,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
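The rollback spec updates the DaemonSet template to an image that can never pull ("foo:non-existent" in the log), then restores the previous template before the rollout finishes; pods the rollout never touched must not be restarted. A sketch of the update-then-revert sequence, assuming an illustrative namespace:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	dsClient := cs.AppsV1().DaemonSets("default") // namespace is illustrative

	// Trigger a rollout with an unpullable image.
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the rollout completes by restoring the old image;
	// healthy pods from the previous revision should keep running untouched.
	ds, err = dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}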
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:15:53.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 22 01:15:53.504: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[... the same check logged for master2 and master3, and the triplet repeated on every poll below ...]
Nov 22 01:15:53.506: INFO: Number of nodes with available pods: 0
Nov 22 01:15:53.506: INFO: Node node1 is running more than one daemon pod
[... the same 0-available poll repeated at 01:15:54 and 01:15:55 ...]
Nov 22 01:15:56.516: INFO: Number of nodes with available pods: 1
Nov 22 01:15:56.516: INFO: Node node1 is running more than one daemon pod
Nov 22 01:15:57.514: INFO: Number of nodes with available pods: 2
Nov 22 01:15:57.514: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Nov 22 01:15:57.531: INFO: Number of nodes with available pods: 1
Nov 22 01:15:57.531: INFO: Node node1 is running more than one daemon pod
[... the same 1-available poll repeated once per second from 01:15:58 through 01:16:04 ...]
Nov 22 01:16:05.540: INFO: Number of nodes with available pods: 2
Nov 22 01:16:05.540: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2940, will wait for the garbage collector to delete the pods
Nov 22 01:16:05.601: INFO: Deleting DaemonSet.extensions daemon-set took: 5.819455ms
Nov 22 01:16:05.702: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.985601ms
Nov 22 01:16:13.504: INFO: Number of nodes with available pods: 0
Nov 22 01:16:13.504: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 01:16:13.506: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"58654"},"items":null}
Nov 22 01:16:13.509: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"58654"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:16:13.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2940" for this suite.
• [SLOW TEST:20.071 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":12,"skipped":2664,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
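The "simple daemon" spec deletes one daemon pod and polls, exactly as the log shows, until the DaemonSet controller recreates it on the same node. A sketch of that delete-and-wait loop, assuming the daemon pods carry an illustrative daemon=true label in namespace "default":

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	pods := cs.CoreV1().Pods("default") // namespace is illustrative

	// Pick one daemon pod and delete it; the controller should revive it.
	list, err := pods.List(ctx, metav1.ListOptions{LabelSelector: "daemon=true"})
	if err != nil {
		panic(err)
	}
	if len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	victim := list.Items[0].Name
	if err := pods.Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll, like the e2e framework does, until the pod count recovers
	// (counting only pods that are not already terminating).
	for i := 0; i < 60; i++ {
		l, err := pods.List(ctx, metav1.ListOptions{LabelSelector: "daemon=true"})
		if err != nil {
			panic(err)
		}
		alive := 0
		for _, p := range l.Items {
			if p.DeletionTimestamp == nil {
				alive++
			}
		}
		if alive == len(list.Items) {
			fmt.Println("daemon pod revived")
			return
		}
		time.Sleep(time.Second)
	}
	panic("daemon pod was not revived in time")
}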
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:16:13.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 22 01:16:13.552: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 22 01:16:13.561: INFO: Waiting for terminating namespaces to be deleted...
Nov 22 01:16:13.563: INFO: Logging pods the apiserver thinks is on node node1 before test
[... per-pod container statuses for node1 (cmk, cmk-init-discover, kube-flannel, kube-multus, kube-proxy, kubernetes-metrics-scraper, nginx-proxy, node-feature-discovery-worker, sriov device plugin, collectd, node-exporter, prometheus-k8s-0), matching the suite-start listing ...]
Nov 22 01:16:13.573: INFO: Logging pods the apiserver thinks is on node node2 before test
[... per-pod container statuses for node2 (cmk-init-discover, cmk, cmk-webhook, kube-flannel, kube-multus, kube-proxy, kubernetes-dashboard, nginx-proxy, node-feature-discovery-worker, sriov device plugin, collectd, node-exporter, tas-telemetry-aware-scheduling) ...]
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-de270a88-d112-451c-ba18-812db0226159 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-de270a88-d112-451c-ba18-812db0226159 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-de270a88-d112-451c-ba18-812db0226159
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:16:21.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4378" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":13,"skipped":2742,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
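The matching spec labels the node that hosted a probe pod, then relaunches the pod with a matching nodeSelector; the log's label value is literally "42", while the key is randomly generated per run. A sketch with an illustrative label key and node name:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Apply a label to the chosen node (key is illustrative; the e2e test
	// generates a random key and uses the value "42").
	node, err := cs.CoreV1().Nodes().Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Labels["kubernetes.io/e2e-example"] = "42"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Relaunch the pod with a matching nodeSelector; it must land on that node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}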
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:16:21.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:16:52.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2643" for this suite.
STEP: Destroying namespace "nsdeletetest-1784" for this suite.
Nov 22 01:16:52.773: INFO: Namespace nsdeletetest-1784 was already deleted
STEP: Destroying namespace "nsdeletetest-693" for this suite.
• [SLOW TEST:31.095 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":14,"skipped":3547,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
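The namespaces spec relies on cascading deletion: removing a namespace garbage-collects every pod inside it, so recreating a namespace with the same basename must yield an empty pod list. A sketch of the create/delete half, with illustrative pod and namespace names:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a throwaway namespace with one pod in it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"}}
	created, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
	if _, err := cs.CoreV1().Pods(created.Name).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Deleting the namespace cascades to the pod; once the namespace is gone,
	// listing pods in a recreated namespace of the same name must return nothing.
	if err := cs.CoreV1().Namespaces().Delete(ctx, created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("namespace deletion requested; its pods will be garbage-collected")
}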
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:16:52.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 22 01:16:52.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 22 01:16:52.812: INFO: Waiting for terminating namespaces to be deleted...
Nov 22 01:16:52.814: INFO: Logging pods the apiserver thinks is on node node1 before test
[... per-pod container statuses for node1, unchanged from the 01:16:13 listing above ...]
Nov 22 01:16:52.824: INFO: Logging pods the apiserver thinks is on node node2 before test
[... per-pod container statuses for node2, unchanged from the 01:16:13 listing above ...]
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b9b96bf6118aec], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:16:53.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2615" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":15,"skipped":3985,"failed":0}
[... Ginkgo "S" markers for skipped specs elided ...]
------------------------------
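The non-matching spec expects the pod to stay Pending with a FailedScheduling warning, as in the "Considering event" line above. A sketch that creates such a pod and then lists the matching scheduler events (pod name, namespace, and selector are illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A pod whose nodeSelector matches no node label can never schedule.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Look for the scheduler's FailedScheduling warning for this pod.
	events, err := cs.CoreV1().Events("default").List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=restricted-pod,reason=FailedScheduling",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d FailedScheduling event(s)\n", len(events.Items))
}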
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:16:53.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 22 01:16:53.921: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Nov 22 01:16:53.928: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[... the same check logged for master2 and master3, and the triplet repeated on every poll below ...]
Nov 22 01:16:53.930: INFO: Number of nodes with available pods: 0
Nov 22 01:16:53.930: INFO: Node node1 is running more than one daemon pod
[... the same 0-available poll repeated at 01:16:54 and 01:16:55 ...]
Nov 22 01:16:56.942: INFO: Number of nodes with available pods: 2
Nov 22 01:16:56.942: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Nov 22 01:16:56.963: INFO: Wrong image for pod: daemon-set-2xq79. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 22 01:16:56.963: INFO: Wrong image for pod: daemon-set-vndkr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
[... the daemon-set-2xq79 wrong-image check repeated on each 1s poll through 01:17:06, with "Pod daemon-set-wp4b4 is not available" logged from 01:17:04.972; polls at 01:17:07 through 01:17:09 logged only the master-taint skips ...]
Nov 22 01:17:10.971: INFO: Pod daemon-set-9zflk is not available
Nov 22 01:17:10.976: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 22 01:17:10.976: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Nov 22 01:17:10.976: INFO: DaemonSet pods can't tolerate node master3 with taints
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Nov 22 01:17:10.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:10.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:10.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:10.983: INFO: Number of nodes with available pods: 1 Nov 22 01:17:10.983: INFO: Node node2 is running more than one daemon pod Nov 22 01:17:11.992: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:11.992: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:11.992: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:11.995: INFO: Number of nodes with available pods: 1 Nov 22 01:17:11.995: INFO: Node node2 is running more than one daemon pod Nov 22 01:17:12.990: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:12.990: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:12.990: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 01:17:12.994: INFO: Number of nodes with available pods: 2 Nov 22 01:17:12.994: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4654, will wait for the garbage collector to delete the pods Nov 22 01:17:13.070: INFO: Deleting DaemonSet.extensions daemon-set took: 7.082049ms Nov 22 01:17:13.170: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.321713ms Nov 22 01:17:23.474: INFO: Number of nodes with available pods: 0 Nov 22 01:17:23.474: INFO: Number of running nodes: 0, number of available pods: 0 Nov 22 01:17:23.476: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"59067"},"items":null} Nov 22 01:17:23.478: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"59067"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 01:17:23.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4654" for this suite. 
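[Editor's note] The wait loop above polls once per second until no DaemonSet pod still runs the old httpd image; the repeated "can't tolerate node masterN" lines are the helper skipping the control-plane nodes, since the test pods carry no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is why only node1 and node2 ever run a daemon pod. A minimal client-go sketch of that flow follows. This is not the e2e framework's own helper code: the kubeconfig path, namespace, DaemonSet name, and images are taken from the log, while the "daemonset-name" label selector is an assumption about how the test labels its pods.

    // Sketch: trigger a DaemonSet RollingUpdate and poll until it converges.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        ns, name := "daemonsets-4654", "daemon-set" // names taken from the log
        newImage := "k8s.gcr.io/e2e-test-images/agnhost:2.32"

        // Changing the pod template image is what starts the RollingUpdate:
        // the controller then replaces pods node by node.
        ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ds.Spec.Template.Spec.Containers[0].Image = newImage
        if _, err := cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }

        // Poll roughly the way the log shows (once per second) until no pod
        // still runs the old image. Real code would also enforce a timeout.
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
                LabelSelector: "daemonset-name=" + name, // assumed label
            })
            if err != nil {
                panic(err)
            }
            stale := 0
            for _, p := range pods.Items {
                if p.Spec.Containers[0].Image != newImage {
                    fmt.Printf("Wrong image for pod: %s\n", p.Name)
                    stale++
                }
            }
            if stale == 0 {
                break
            }
            time.Sleep(time.Second)
        }
    }
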
• [SLOW TEST:29.614 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":16,"skipped":4188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 22 01:17:23.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 01:17:29.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3985" for this suite.
STEP: Destroying namespace "nsdeletetest-1472" for this suite.
Nov 22 01:17:29.611: INFO: Namespace nsdeletetest-1472 was already deleted
STEP: Destroying namespace "nsdeletetest-5372" for this suite.
• [SLOW TEST:6.100 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":17,"skipped":5476,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Nov 22 01:17:29.620: INFO: Running AfterSuite actions on all nodes
Nov 22 01:17:29.620: INFO: Running AfterSuite actions on node 1
Nov 22 01:17:29.620: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0}

Ran 17 of 5770 Specs in 889.011 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped
PASS

Ginkgo ran 1 suite in 14m50.366148786s
Test Suite Passed
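[Editor's note] The Namespaces spec above reduces to a simple flow: create a namespace, put a Service in it, delete the namespace, wait until it is fully gone, recreate it, and verify no Service survived. A minimal client-go sketch of that flow follows; it is not the e2e framework's code, and the namespace and service names are illustrative rather than the generated ones ("nsdeletetest-1472" etc.) in the log.

    // Sketch: verify that deleting a namespace also removes its services.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        nsName := "nsdeletetest" // illustrative name

        // Create a test namespace and a service inside it.
        if _, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
            ObjectMeta: metav1.ObjectMeta{Name: nsName},
        }, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        if _, err := cs.CoreV1().Services(nsName).Create(ctx, &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
            Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
        }, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // Delete the namespace and wait for it to be removed entirely;
        // real code would also enforce a timeout on this loop.
        if err := cs.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        for {
            _, err := cs.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                break
            }
            time.Sleep(time.Second)
        }

        // Recreate the namespace: the service must not have survived.
        if _, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
            ObjectMeta: metav1.ObjectMeta{Name: nsName},
        }, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        svcs, err := cs.CoreV1().Services(nsName).List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("services after recreate: %d (want 0)\n", len(svcs.Items))
    }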