I1105 23:40:05.843613 23 e2e.go:129] Starting e2e run "a9243555-e9b4-46dd-a233-7bd30dcb06c5" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636155604 - Will randomize all specs
Will run 17 of 5770 specs

Nov 5 23:40:05.902: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:40:05.907: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 5 23:40:05.934: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 5 23:40:05.998: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov 5 23:40:05.998: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov 5 23:40:05.998: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 5 23:40:05.998: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 5 23:40:05.998: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 5 23:40:06.015: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 5 23:40:06.015: INFO: e2e test version: v1.21.5
Nov 5 23:40:06.016: INFO: kube-apiserver version: v1.21.1
Nov 5 23:40:06.016: INFO: >>> kubeConfig: /root/.kube/config
Nov 5 23:40:06.021: INFO: Cluster IP family: ipv4
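The startup block above gates the run on cluster health: every node schedulable, kube-system pods and daemonsets ready. The suite uses its own helpers under test/e2e/framework for this; purely as an illustration, a standalone client-go program performing the same kind of node gate might look like the sketch below (the kubeconfig path is taken from the log; the 5-second polling interval is an assumption, not the framework's value).

```go
// Illustrative sketch only, NOT the e2e framework's helper: poll until
// every node is Ready and schedulable, up to the 30m timeout in the log.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *v1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the node list until all nodes are Ready and schedulable.
	err = wait.PollImmediate(5*time.Second, 30*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, n := range nodes.Items {
			if n.Spec.Unschedulable || !nodeReady(&n) {
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all nodes schedulable")
}
```

The later sketches in this log reuse a clientset built exactly this way.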
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:40:06.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
W1105 23:40:06.070986 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 5 23:40:06.072: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 5 23:40:06.075: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 5 23:40:06.093: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 5 23:41:06.158: INFO: Waiting for terminating namespaces to be deleted...
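The [It] block that follows drives the PriorityClass API through different HTTP verbs; the two Forbidden messages below are the expected rejections when an update tries to change .value. A hedged client-go sketch of the same verb sequence follows (the name "p1" is taken from the log, but the value, label, and exact call order are illustrative, not the test's own code):

```go
package sketch

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// exercisePriorityClassEndpoints mirrors the spec's POST/PUT/PATCH/DELETE
// round trip against scheduling.k8s.io/v1. cs is a clientset built as in
// the first sketch.
func exercisePriorityClassEndpoints(ctx context.Context, cs kubernetes.Interface) error {
	pcs := cs.SchedulingV1().PriorityClasses()

	// POST: create PriorityClass "p1" (name from the log; value assumed).
	pc, err := pcs.Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      1000,
	}, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	// PUT: changing Value must be rejected, producing the log's
	// `Value: Forbidden: may not be changed in an update.` error.
	pc.Value = 2000
	if _, err := pcs.Update(ctx, pc, metav1.UpdateOptions{}); err == nil {
		return fmt.Errorf("expected the value update to be forbidden")
	}

	// PATCH: metadata changes (a label here) are allowed.
	if _, err := pcs.Patch(ctx, "p1", types.MergePatchType,
		[]byte(`{"metadata":{"labels":{"e2e":"true"}}}`), metav1.PatchOptions{}); err != nil {
		return err
	}

	// DELETE: clean up.
	return pcs.Delete(ctx, "p1", metav1.DeleteOptions{})
}
```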
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:41:06.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:41:06.206: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Nov 5 23:41:06.211: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:41:06.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-8553" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:41:06.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9573" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.232 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":1,"skipped":2050,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:41:06.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:41:06.310: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Nov 5 23:41:06.317: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:06.317: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:06.317: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:06.320: INFO: Number of nodes with available pods: 0
Nov 5 23:41:06.320: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:07.325: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:07.325: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:07.325: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:07.327: INFO: Number of nodes with available pods: 0
Nov 5 23:41:07.327: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:08.326: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:08.326: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:08.326: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:08.328: INFO: Number of nodes with available pods: 0
Nov 5 23:41:08.328: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:09.324: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:09.324: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:09.324: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:09.327: INFO: Number of nodes with available pods: 0
Nov 5 23:41:09.327: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:10.327: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:10.327: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:10.327: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:10.330: INFO: Number of nodes with available pods: 2
Nov 5 23:41:10.330: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Nov 5 23:41:10.358: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:10.358: INFO: Wrong image for pod: daemon-set-n72qg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:10.362: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:10.362: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:10.362: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:11.366: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:11.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:11.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:11.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:12.366: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:12.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:12.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:12.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:13.367: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:13.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:13.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:13.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:14.367: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
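The "Update daemon pods image." step above is what kicks off the rolling update; the "Wrong image" polling that follows is the framework watching old pods get replaced. A sketch of how such an update can be triggered with client-go (the image is the one in the log; the container name "daemon-set" and the strategic-merge patch shape are assumptions, not the test's exact code):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchDaemonSetImage updates the pod template image; under the
// RollingUpdate strategy the controller then replaces daemon pods
// node by node, which is the churn visible in the log.
func patchDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	// Container name "daemon-set" is assumed for illustration.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
		`{"name":"daemon-set","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```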
Nov 5 23:41:14.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:14.372: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:14.372: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:15.367: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:15.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:15.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:15.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:16.367: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:16.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:16.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:16.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:17.366: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:17.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:17.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:17.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:18.368: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:18.372: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:18.372: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:18.372: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:19.366: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
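The thrice-repeated "can't tolerate node masterX" lines are the framework deciding which nodes should even count: a daemon pod is only expected on a node whose taints its tolerations cover, and the test's pods carry no toleration for node-role.kubernetes.io/master:NoSchedule. A minimal, self-contained illustration of that check, using the ToleratesTaint helper that k8s.io/api/core/v1 provides:

```go
// Standalone illustration of the skip logic in the log; not framework code.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// toleratesAll reports whether the given tolerations cover every taint.
func toleratesAll(tolerations []v1.Toleration, taints []v1.Taint) bool {
	for i := range taints {
		tolerated := false
		for j := range tolerations {
			if tolerations[j].ToleratesTaint(&taints[i]) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			return false
		}
	}
	return true
}

func main() {
	master := v1.Taint{
		Key:    "node-role.kubernetes.io/master",
		Effect: v1.TaintEffectNoSchedule,
	}
	// No tolerations at all, as for the test's daemon pods:
	// master nodes are skipped when counting expected pods.
	fmt.Println(toleratesAll(nil, []v1.Taint{master})) // false
}
```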
Nov 5 23:41:19.366: INFO: Pod daemon-set-mffct is not available
Nov 5 23:41:19.372: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:19.372: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:19.373: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:20.366: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:20.366: INFO: Pod daemon-set-mffct is not available
Nov 5 23:41:20.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:20.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:20.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:21.367: INFO: Wrong image for pod: daemon-set-956kb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Nov 5 23:41:21.367: INFO: Pod daemon-set-mffct is not available
Nov 5 23:41:21.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:21.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:21.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:22.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:22.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:22.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:23.369: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:23.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:23.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:24.372: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:24.373: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:24.373: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:25.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:25.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:25.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:26.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:26.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:26.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:27.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:27.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:27.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:28.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:28.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:28.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.366: INFO: Pod daemon-set-scql8 is not available
Nov 5 23:41:29.370: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.370: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.370: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Nov 5 23:41:29.376: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.376: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.376: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:29.379: INFO: Number of nodes with available pods: 1
Nov 5 23:41:29.379: INFO: Node node2 is running more than one daemon pod
Nov 5 23:41:30.386: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:30.386: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:30.386: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:30.389: INFO: Number of nodes with available pods: 1
Nov 5 23:41:30.389: INFO: Node node2 is running more than one daemon pod
Nov 5 23:41:31.384: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:31.384: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:31.384: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:31.389: INFO: Number of nodes with available pods: 1
Nov 5 23:41:31.389: INFO: Node node2 is running more than one daemon pod
Nov 5 23:41:32.386: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:32.386: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:32.386: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:32.389: INFO: Number of nodes with available pods: 2
Nov 5 23:41:32.389: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2559, will wait for the garbage collector to delete the pods
Nov 5 23:41:32.461: INFO: Deleting DaemonSet.extensions daemon-set took: 4.311523ms
Nov 5 23:41:32.563: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.199877ms
Nov 5 23:41:38.766: INFO: Number of nodes with available pods: 0
Nov 5 23:41:38.766: INFO: Number of running nodes: 0, number of available pods: 0
Nov 5 23:41:38.768: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55754"},"items":null}
Nov 5 23:41:38.771: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55754"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:41:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2559" for this suite.
• [SLOW TEST:32.513 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":2,"skipped":2128,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:41:38.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:41:38.848: INFO: Create a RollingUpdate DaemonSet
Nov 5 23:41:38.856: INFO: Check that daemon pods launch on every node of the cluster
Nov 5 23:41:38.862: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:38.862: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:38.862: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:38.869: INFO: Number of nodes with available pods: 0
Nov 5 23:41:38.869: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:39.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:39.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:39.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:39.878: INFO: Number of nodes with available pods: 0
Nov 5 23:41:39.878: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:40.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:40.876: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:40.876: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:40.878: INFO: Number of nodes with available pods: 0
Nov 5 23:41:40.878: INFO: Node node1 is running more than one daemon pod
Nov 5 23:41:41.874: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:41.874: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:41.874: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:41.877: INFO: Number of nodes with available pods: 2
Nov 5 23:41:41.877: INFO: Number of running nodes: 2, number of available pods: 2
Nov 5 23:41:41.877: INFO: Update the DaemonSet to trigger a rollout
Nov 5 23:41:41.884: INFO: Updating DaemonSet daemon-set
Nov 5 23:41:45.897: INFO: Roll back the DaemonSet before rollout is complete
Nov 5 23:41:45.904: INFO: Updating DaemonSet daemon-set
Nov 5 23:41:45.904: INFO: Make sure DaemonSet rollback is complete
Nov 5 23:41:45.906: INFO: Wrong image for pod: daemon-set-5cx2d. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
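At this point the spec has pushed a deliberately broken image (foo:non-existent) to trigger a rollout, then rolled the template back before the rollout finished; "without unnecessary restarts" means pods that never picked up the bad image must be left alone. A sketch of the rollback step, assuming the previous image is known (this is what `kubectl rollout undo daemonset/daemon-set` achieves declaratively; the e2e test performs its own template update rather than calling this function):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollbackDaemonSet reverts the pod template to goodImage. Pods already
// running goodImage match the restored template, so the controller has
// no reason to restart them.
func rollbackDaemonSet(ctx context.Context, cs kubernetes.Interface, ns, name, goodImage string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Single-container template assumed, as in the log's DaemonSet.
	ds.Spec.Template.Spec.Containers[0].Image = goodImage
	_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	return err
}
```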
Nov 5 23:41:45.906: INFO: Pod daemon-set-5cx2d is not available
Nov 5 23:41:45.910: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:45.910: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:45.910: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:46.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:46.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:46.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:47.918: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:47.918: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:47.918: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:48.918: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:48.918: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:48.918: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:49.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:49.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:49.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:50.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:50.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:50.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:51.921: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:51.921: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:51.921: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:52.920: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:52.920: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:52.920: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:53.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:53.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:53.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:54.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:54.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:54.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:55.917: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:55.917: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:55.917: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:56.919: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:56.919: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:56.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:57.918: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:57.918: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:57.919: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:58.914: INFO: Pod daemon-set-zckzr is not available
Nov 5 23:41:58.918: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:58.918: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:41:58.918: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5526, will wait for the garbage collector to delete the pods
Nov 5 23:41:58.980: INFO: Deleting DaemonSet.extensions daemon-set took: 4.117273ms
Nov 5 23:41:59.081: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.788779ms
Nov 5 23:42:08.784: INFO: Number of nodes with available pods: 0
Nov 5 23:42:08.784: INFO: Number of running nodes: 0, number of available pods: 0
Nov 5 23:42:08.787: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55946"},"items":null}
Nov 5 23:42:08.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55946"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:42:08.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5526" for this suite.
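Each DaemonSet spec tears down the same way: delete the object, then let the garbage collector remove the dependent pods ("will wait for the garbage collector to delete the pods" above, followed by polling until zero pods remain). With plain client-go, an equivalent delete looks like the sketch below; foreground propagation makes the object linger in a deleting state until its dependents are gone, while the log's delete-then-poll pattern suggests the framework tracks the pods itself:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSet removes a DaemonSet and lets the garbage collector
// clean up its pods. Foreground propagation is one way to tie the two
// together; it is an illustrative choice, not the framework's.
func deleteDaemonSet(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	fg := metav1.DeletePropagationForeground
	return cs.AppsV1().DaemonSets(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &fg,
	})
}
```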
• [SLOW TEST:30.012 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":3,"skipped":2411,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:42:08.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:42:39.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5077" for this suite.
STEP: Destroying namespace "nsdeletetest-6656" for this suite.
Nov 5 23:42:39.919: INFO: Namespace nsdeletetest-6656 was already deleted
STEP: Destroying namespace "nsdeletetest-137" for this suite.
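The STEP sequence in the namespace test above is self-describing: a pod is created in a fresh namespace, the namespace is deleted, then recreated, and the recreated namespace must contain no pods. A condensed client-go sketch of the same flow (the namespace and pod names are illustrative, and the real spec also waits for the pod to reach Running before deleting):

```go
package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// checkNamespaceDeletesPods creates a namespace with a pod, deletes the
// namespace, and waits for it to disappear; once it is gone, its pods
// are gone with it.
func checkNamespaceDeletesPods(ctx context.Context, cs kubernetes.Interface) error {
	ns := "nsdeletetest-demo" // illustrative name
	if _, err := cs.CoreV1().Namespaces().Create(ctx, &v1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: v1.PodSpec{Containers: []v1.Container{{
			Name:  "httpd",
			Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// The namespace moves through Terminating before vanishing.
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, nil
	})
}
```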
• [SLOW TEST:31.121 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":4,"skipped":2612,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:42:39.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 5 23:42:39.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:39.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:39.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:39.981: INFO: Number of nodes with available pods: 0
Nov 5 23:42:39.981: INFO: Node node1 is running more than one daemon pod
Nov 5 23:42:40.986: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:40.986: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:40.986: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:40.990: INFO: Number of nodes with available pods: 0
Nov 5 23:42:40.990: INFO: Node node1 is running more than one daemon pod
Nov 5 23:42:41.988: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:41.988: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:41.988: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:41.991: INFO: Number of nodes with available pods: 0
Nov 5 23:42:41.991: INFO: Node node1 is running more than one daemon pod
Nov 5 23:42:42.989: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:42.989: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:42.989: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:42.992: INFO: Number of nodes with available pods: 1
Nov 5 23:42:42.992: INFO: Node node1 is running more than one daemon pod
Nov 5 23:42:43.987: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:43.987: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:43.987: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:43.990: INFO: Number of nodes with available pods: 2
Nov 5 23:42:43.990: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Nov 5 23:42:44.005: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:44.005: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:44.005: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:44.008: INFO: Number of nodes with available pods: 1
Nov 5 23:42:44.008: INFO: Node node2 is running more than one daemon pod
Nov 5 23:42:45.012: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:45.012: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:45.012: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:45.015: INFO: Number of nodes with available pods: 1
Nov 5 23:42:45.015: INFO: Node node2 is running more than one daemon pod
Nov 5 23:42:46.014: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:46.014: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:46.014: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:46.017: INFO: Number of nodes with available pods: 1
Nov 5 23:42:46.017: INFO: Node node2 is running more than one daemon pod
Nov 5 23:42:47.013: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:47.013: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:47.013: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:42:47.016: INFO: Number of nodes with available pods: 2
Nov 5 23:42:47.016: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6665, will wait for the garbage collector to delete the pods
Nov 5 23:42:47.079: INFO: Deleting DaemonSet.extensions daemon-set took: 5.260261ms
Nov 5 23:42:47.180: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.702052ms
Nov 5 23:42:50.982: INFO: Number of nodes with available pods: 0
Nov 5 23:42:50.982: INFO: Number of running nodes: 0, number of available pods: 0
Nov 5 23:42:50.984: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"56205"},"items":null}
Nov 5 23:42:50.986: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"56205"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:42:50.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6665" for this suite.
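The fault injection in the spec above ("Set a daemon pod's phase to 'Failed'") goes through the pod status subresource; the DaemonSet controller then deletes the failed pod and creates a replacement, which is the dip to 1 available pod and recovery to 2 visible in the log. A sketch of that injection (the pod name is illustrative):

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// failOneDaemonPod forces a daemon pod's phase to Failed via the status
// subresource, simulating a crashed pod the controller must replace.
func failOneDaemonPod(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = v1.PodFailed
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}
```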
• [SLOW TEST:11.070 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":5,"skipped":2692,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:42:51.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 5 23:42:51.031: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 5 23:42:51.039: INFO: Waiting for terminating namespaces to be deleted...
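The spec being set up here asserts the negative case of nodeSelector: a pod selecting a label that no node carries must stay Pending with a PodScheduled=False condition and reason Unschedulable. A sketch of that assertion (the label, pod name, and pause image are illustrative, not the test's exact values):

```go
package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// expectUnschedulable creates a pod whose nodeSelector matches no node
// and waits until the scheduler reports it Unschedulable.
func expectUnschedulable(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{"nonexistent-label": "true"}, // matches nothing
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, "restricted-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range p.Status.Conditions {
			if c.Type == v1.PodScheduled && c.Status == v1.ConditionFalse &&
				c.Reason == v1.PodReasonUnschedulable {
				return true, nil
			}
		}
		return false, nil
	})
}
```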
Nov 5 23:42:51.041: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 5 23:42:51.051: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 5 23:42:51.051: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:42:51.051: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:42:51.051: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 5 23:42:51.051: INFO: Container discover ready: false, restart count 0 Nov 5 23:42:51.051: INFO: Container init ready: false, restart count 0 Nov 5 23:42:51.051: INFO: Container install ready: false, restart count 0 Nov 5 23:42:51.051: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:42:51.051: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:42:51.051: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:42:51.051: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:42:51.051: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:42:51.051: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:42:51.051: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:42:51.051: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:42:51.051: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:42:51.051: INFO: Container collectd ready: true, restart count 0 Nov 5 23:42:51.051: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:42:51.051: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:42:51.051: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:42:51.051: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:51.052: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:42:51.052: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 5 23:42:51.052: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:42:51.052: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:42:51.052: INFO: Container grafana ready: true, restart count 0 
Nov 5 23:42:51.052: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:42:51.052: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.052: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:42:51.052: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 5 23:42:51.062: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 5 23:42:51.062: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:42:51.062: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:42:51.062: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 5 23:42:51.062: INFO: Container discover ready: false, restart count 0 Nov 5 23:42:51.062: INFO: Container init ready: false, restart count 0 Nov 5 23:42:51.062: INFO: Container install ready: false, restart count 0 Nov 5 23:42:51.062: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:42:51.062: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:42:51.062: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:42:51.062: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:42:51.062: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:42:51.062: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:42:51.062: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:42:51.062: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:42:51.062: INFO: Container collectd ready: true, restart count 0 Nov 5 23:42:51.062: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:42:51.062: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:42:51.062: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:51.062: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:42:51.062: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 5 23:42:51.062: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:51.062: INFO: Container 
prometheus-operator ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b4cb014d2d2699], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:42:52.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-225" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":6,"skipped":2965,"failed":0} ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:42:52.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:42:58.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7275" for this suite. STEP: Destroying namespace "nsdeletetest-9415" for this suite. Nov 5 23:42:58.207: INFO: Namespace nsdeletetest-9415 was already deleted STEP: Destroying namespace "nsdeletetest-727" for this suite.
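The namespace test above checks cascading deletion: the Service created inside the test namespace is removed along with it, and a recreated namespace of the same name starts empty. A rough client-go sketch of that check, with the wait-and-recreate step elided and the function name hypothetical:

package nssketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// servicesGoneAfterRecreate deletes a namespace and, once a namespace of
// the same name exists again, reports whether it holds any Services.
func servicesGoneAfterRecreate(ctx context.Context, cs kubernetes.Interface, ns string) (bool, error) {
    if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
        return false, err
    }
    // ... wait for the namespace to vanish, then recreate it (elided) ...
    list, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
    if err != nil {
        return false, err
    }
    return len(list.Items) == 0, nil
}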
• [SLOW TEST:6.098 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":7,"skipped":3205,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:42:58.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 5 23:42:58.238: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 5 23:42:58.246: INFO: Waiting for terminating namespaces to be deleted...
Nov 5 23:42:58.248: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 5 23:42:58.258: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 5 23:42:58.258: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:42:58.258: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 5 23:42:58.258: INFO: Container discover ready: false, restart count 0 Nov 5 23:42:58.258: INFO: Container init ready: false, restart count 0 Nov 5 23:42:58.258: INFO: Container install ready: false, restart count 0 Nov 5 23:42:58.258: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:42:58.258: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:42:58.258: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:42:58.258: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:42:58.258: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:42:58.258: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:42:58.258: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:42:58.258: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:42:58.258: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:42:58.258: INFO: Container collectd ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:42:58.258: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:42:58.258: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:42:58.258: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 5 23:42:58.258: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Container grafana ready: true, restart count 0 
Nov 5 23:42:58.258: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:42:58.258: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.258: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:42:58.258: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 5 23:42:58.265: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 5 23:42:58.265: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:42:58.265: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:42:58.265: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 5 23:42:58.265: INFO: Container discover ready: false, restart count 0 Nov 5 23:42:58.265: INFO: Container init ready: false, restart count 0 Nov 5 23:42:58.265: INFO: Container install ready: false, restart count 0 Nov 5 23:42:58.265: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:42:58.265: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:42:58.265: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:42:58.265: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:42:58.265: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:42:58.265: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:42:58.265: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:42:58.265: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:42:58.265: INFO: Container collectd ready: true, restart count 0 Nov 5 23:42:58.265: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:42:58.265: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:42:58.265: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:58.265: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:42:58.265: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 5 23:42:58.265: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:42:58.265: INFO: Container 
prometheus-operator ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Nov 5 23:42:58.320: INFO: Pod cmk-bnvd2 requesting resource cpu=0m on Node node2 Nov 5 23:42:58.320: INFO: Pod cmk-cfm9r requesting resource cpu=0m on Node node1 Nov 5 23:42:58.320: INFO: Pod cmk-webhook-6c9d5f8578-wq5mk requesting resource cpu=0m on Node node1 Nov 5 23:42:58.320: INFO: Pod kube-flannel-cqj7j requesting resource cpu=150m on Node node2 Nov 5 23:42:58.320: INFO: Pod kube-flannel-hxwks requesting resource cpu=150m on Node node1 Nov 5 23:42:58.320: INFO: Pod kube-multus-ds-amd64-mqrl8 requesting resource cpu=100m on Node node1 Nov 5 23:42:58.320: INFO: Pod kube-multus-ds-amd64-p7bxx requesting resource cpu=100m on Node node2 Nov 5 23:42:58.320: INFO: Pod kube-proxy-j9lmg requesting resource cpu=0m on Node node2 Nov 5 23:42:58.321: INFO: Pod kube-proxy-mc4cs requesting resource cpu=0m on Node node1 Nov 5 23:42:58.321: INFO: Pod kubernetes-dashboard-785dcbb76d-9wtdz requesting resource cpu=50m on Node node1 Nov 5 23:42:58.321: INFO: Pod kubernetes-metrics-scraper-5558854cb-v9vgg requesting resource cpu=0m on Node node2 Nov 5 23:42:58.321: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Nov 5 23:42:58.321: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Nov 5 23:42:58.321: INFO: Pod node-feature-discovery-worker-pn6cr requesting resource cpu=0m on Node node2 Nov 5 23:42:58.321: INFO: Pod node-feature-discovery-worker-spmbf requesting resource cpu=0m on Node node1 Nov 5 23:42:58.321: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn requesting resource cpu=0m on Node node1 Nov 5 23:42:58.321: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p requesting resource cpu=0m on Node node2 Nov 5 23:42:58.321: INFO: Pod collectd-5k6s9 requesting resource cpu=0m on Node node1 Nov 5 23:42:58.321: INFO: Pod collectd-r2g57 requesting resource cpu=0m on Node node2 Nov 5 23:42:58.321: INFO: Pod node-exporter-fvksz requesting resource cpu=112m on Node node1 Nov 5 23:42:58.321: INFO: Pod node-exporter-k7p79 requesting resource cpu=112m on Node node2 Nov 5 23:42:58.321: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Nov 5 23:42:58.321: INFO: Pod prometheus-operator-585ccfb458-vh55q requesting resource cpu=100m on Node node2 Nov 5 23:42:58.321: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-qbp7s requesting resource cpu=0m on Node node1 STEP: Starting Pods to consume most of the cluster CPU. Nov 5 23:42:58.321: INFO: Creating a pod which consumes cpu=53454m on Node node1 Nov 5 23:42:58.332: INFO: Creating a pod which consumes cpu=53559m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6.16b4cb02fcd1472a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2682/filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6.16b4cb0350a7ffc6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6.16b4cb0364f4a5ec], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 340.558812ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6.16b4cb036ba82cc9], Reason = [Created], Message = [Created container filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6] STEP: Considering event: Type = [Normal], Name = [filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6.16b4cb03722cc959], Reason = [Started], Message = [Started container filler-pod-2045f586-b2b5-4c57-9c8d-377971eb87d6] STEP: Considering event: Type = [Normal], Name = [filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320.16b4cb02fd4b8089], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2682/filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320.16b4cb0353097af9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320.16b4cb036e85b202], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 461.117663ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320.16b4cb0374b03750], Reason = [Created], Message = [Created container filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320] STEP: Considering event: Type = [Normal], Name = [filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320.16b4cb037ac34d13], Reason = [Started], Message = [Started container filler-pod-3cec3bfc-9cd2-4cb3-88df-e9398bdb7320] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b4cb03ed24ee9b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:43:03.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2682" for this suite. 
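The filler-pod sizes logged above (53454m and 53559m) are each node's allocatable CPU minus the per-node requests just enumerated, so the follow-up pod's non-zero request cannot fit on either worker and the scheduler reports Insufficient cpu (the three masters are excluded by their taint). A sketch of how such a filler pod could be built with client-go types — the pod and container names are illustrative, while the temporary "node" label key comes from the STEP lines above:

package fillersketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod requests exactly the remaining allocatable CPU on one node,
// e.g. fillerPod("filler", "node1", 53454) for the first creation above.
func fillerPod(name, nodeName string, milliCPU int64) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            // Pin the filler via the temporary "node" label the test
            // stamped on each node before starting the pods.
            NodeSelector: map[string]string{"node": nodeName},
            Containers: []corev1.Container{{
                Name:  "filler",
                Image: "k8s.gcr.io/pause:3.4.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: *resource.NewMilliQuantity(milliCPU, resource.DecimalSI),
                    },
                },
            }},
        },
    }
}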
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.195 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":8,"skipped":3649,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:43:03.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Nov 5 23:43:03.449: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Nov 5 23:43:03.455: INFO: Number of nodes with available pods: 0 Nov 5 23:43:03.455: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched.
Nov 5 23:43:03.471: INFO: Number of nodes with available pods: 0 Nov 5 23:43:03.471: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:04.475: INFO: Number of nodes with available pods: 0 Nov 5 23:43:04.475: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:05.476: INFO: Number of nodes with available pods: 0 Nov 5 23:43:05.476: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:06.476: INFO: Number of nodes with available pods: 0 Nov 5 23:43:06.476: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:07.476: INFO: Number of nodes with available pods: 1 Nov 5 23:43:07.476: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Nov 5 23:43:07.496: INFO: Number of nodes with available pods: 0 Nov 5 23:43:07.496: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Nov 5 23:43:07.501: INFO: Number of nodes with available pods: 0 Nov 5 23:43:07.501: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:08.505: INFO: Number of nodes with available pods: 0 Nov 5 23:43:08.505: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:09.505: INFO: Number of nodes with available pods: 0 Nov 5 23:43:09.506: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:10.506: INFO: Number of nodes with available pods: 0 Nov 5 23:43:10.506: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:11.505: INFO: Number of nodes with available pods: 0 Nov 5 23:43:11.506: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:12.506: INFO: Number of nodes with available pods: 0 Nov 5 23:43:12.506: INFO: Node node2 is running more than one daemon pod Nov 5 23:43:13.505: INFO: Number of nodes with available pods: 1 Nov 5 23:43:13.505: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1993, will wait for the garbage collector to delete the pods Nov 5 23:43:13.568: INFO: Deleting DaemonSet.extensions daemon-set took: 4.414578ms Nov 5 23:43:13.669: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.882079ms Nov 5 23:43:17.472: INFO: Number of nodes with available pods: 0 Nov 5 23:43:17.472: INFO: Number of running nodes: 0, number of available pods: 0 Nov 5 23:43:17.474: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"56473"},"items":null} Nov 5 23:43:17.477: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"56473"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:43:17.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1993" for this suite. 
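This "complex daemon" run drives scheduling purely through labels: the DaemonSet carries a node selector, so relabelling the node from blue to green unschedules its pod, and pointing the selector at green (while switching the update strategy to RollingUpdate) brings one back. A hedged sketch of those two spec changes, where the "color" key and values stand in for whatever label the suite actually generates:

package daemonsketch

import appsv1 "k8s.io/api/apps/v1"

// retargetToGreen mirrors the step that moves the DaemonSet from blue to
// green nodes and switches its update strategy to RollingUpdate.
func retargetToGreen(ds *appsv1.DaemonSet) {
    ds.Spec.Template.Spec.NodeSelector = map[string]string{"color": "green"} // label assumed
    ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
        Type: appsv1.RollingUpdateDaemonSetStrategyType,
    }
}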
• [SLOW TEST:14.088 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":9,"skipped":3654,"failed":0} ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:43:17.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 5 23:43:17.535: INFO: Waiting up to 1m0s for all nodes to be ready Nov 5 23:44:17.603: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Nov 5 23:44:17.630: INFO: Created pod: pod0-sched-preemption-low-priority Nov 5 23:44:17.651: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:44:41.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2039" for this suite.
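Preemption here hinges on pod priority: two pods hold roughly 2/3 of node resources at low and medium priority, then a critical pod requests the same resources as the lower-priority victim. A sketch of the two pieces involved — the PriorityClass name and value are illustrative; system-cluster-critical is the built-in class such critical pods use:

package preemptsketch

import (
    corev1 "k8s.io/api/core/v1"
    schedulingv1 "k8s.io/api/scheduling/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// lowPriorityClass sketches the kind of PriorityClass behind
// pod0-sched-preemption-low-priority; name and value are assumptions.
func lowPriorityClass() *schedulingv1.PriorityClass {
    return &schedulingv1.PriorityClass{
        ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"},
        Value:      100,
    }
}

// markCritical gives the preemptor pod the built-in class that outranks
// any user-defined priority.
func markCritical(spec *corev1.PodSpec) {
    spec.PriorityClassName = "system-cluster-critical"
}

When no node can fit the critical pod, the scheduler evicts the lowest-priority pod whose removal makes room, which is why only the medium-priority pod survives into the pod listings of the next tests.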
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:84.229 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":10,"skipped":3964,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:44:41.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 5 23:44:41.759: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 5 23:44:41.767: INFO: Waiting for terminating namespaces to be deleted... Nov 5 23:44:41.770: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 5 23:44:41.777: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 5 23:44:41.777: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:44:41.777: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 5 23:44:41.777: INFO: Container discover ready: false, restart count 0 Nov 5 23:44:41.777: INFO: Container init ready: false, restart count 0 Nov 5 23:44:41.777: INFO: Container install ready: false, restart count 0 Nov 5 23:44:41.777: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:44:41.777: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:44:41.777: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:44:41.777: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:44:41.777: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO:
Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:44:41.777: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:44:41.777: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:44:41.777: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.777: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:44:41.777: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:44:41.777: INFO: Container collectd ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:44:41.777: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:44:41.777: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:44:41.777: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 5 23:44:41.777: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container grafana ready: true, restart count 0 Nov 5 23:44:41.777: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:44:41.777: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.778: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:44:41.778: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 5 23:44:41.794: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 5 23:44:41.794: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:44:41.794: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:44:41.794: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 5 23:44:41.794: INFO: Container discover ready: false, restart count 0 Nov 5 23:44:41.794: INFO: Container init ready: false, restart count 0 Nov 5 23:44:41.794: INFO: Container install ready: false, restart count 0 Nov 5 23:44:41.794: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:44:41.794: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:44:41.794: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:44:41.794: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system 
started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:44:41.794: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:44:41.794: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:44:41.794: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:44:41.794: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:44:41.794: INFO: Container collectd ready: true, restart count 0 Nov 5 23:44:41.794: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:44:41.794: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:44:41.794: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:44:41.794: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:41.794: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:44:41.794: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 5 23:44:41.795: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:41.795: INFO: Container prometheus-operator ready: true, restart count 0 Nov 5 23:44:41.795: INFO: pod1-sched-preemption-medium-priority from sched-preemption-2039 started at 2021-11-05 23:44:25 +0000 UTC (1 container statuses recorded) Nov 5 23:44:41.795: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d554bea8-53a2-4034-8180-2defbf667563 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d554bea8-53a2-4034-8180-2defbf667563 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d554bea8-53a2-4034-8180-2defbf667563 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:44:49.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4060" for this suite. 
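The matching variant of the NodeSelector test stamps a random kubernetes.io/e2e-... label on one node and relaunches the pod with a matching selector, which is why it lands on node2 and shows up as "with-labels" in the next test's pod listing. A sketch of that relaunched pod — the pod name and label value 42 come from this log, while the image is the suite's usual pause image, assumed here:

package selectorsketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withLabels builds the relaunched pod: it can only land on the node
// that received the random e2e label.
func withLabels(key, value string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{key: value}, // e.g. kubernetes.io/e2e-d554bea8-...: "42"
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "k8s.gcr.io/pause:3.4.1", // image assumed
            }},
        },
    }
}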
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.149 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":11,"skipped":3985,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:44:49.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 5 23:44:49.914: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 5 23:44:49.922: INFO: Waiting for terminating namespaces to be deleted...
Nov 5 23:44:49.926: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 5 23:44:49.936: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 5 23:44:49.936: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:44:49.936: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:44:49.936: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 5 23:44:49.936: INFO: Container discover ready: false, restart count 0 Nov 5 23:44:49.936: INFO: Container init ready: false, restart count 0 Nov 5 23:44:49.936: INFO: Container install ready: false, restart count 0 Nov 5 23:44:49.936: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container cmk-webhook ready: true, restart count 0 Nov 5 23:44:49.936: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kube-flannel ready: true, restart count 3 Nov 5 23:44:49.936: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:44:49.936: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:44:49.936: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 5 23:44:49.936: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:44:49.936: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:44:49.936: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:44:49.936: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:44:49.936: INFO: Container collectd ready: true, restart count 0 Nov 5 23:44:49.936: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:44:49.936: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:44:49.936: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:44:49.936: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:49.936: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:44:49.936: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 5 23:44:49.936: INFO: Container config-reloader ready: true, restart count 0 Nov 5 23:44:49.936: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 5 23:44:49.937: INFO: Container grafana ready: true, restart count 0 
Nov 5 23:44:49.937: INFO: Container prometheus ready: true, restart count 1 Nov 5 23:44:49.937: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.937: INFO: Container tas-extender ready: true, restart count 0 Nov 5 23:44:49.937: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 5 23:44:49.944: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 5 23:44:49.944: INFO: Container nodereport ready: true, restart count 0 Nov 5 23:44:49.944: INFO: Container reconcile ready: true, restart count 0 Nov 5 23:44:49.944: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 5 23:44:49.944: INFO: Container discover ready: false, restart count 0 Nov 5 23:44:49.944: INFO: Container init ready: false, restart count 0 Nov 5 23:44:49.944: INFO: Container install ready: false, restart count 0 Nov 5 23:44:49.944: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-flannel ready: true, restart count 2 Nov 5 23:44:49.944: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-multus ready: true, restart count 1 Nov 5 23:44:49.944: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-proxy ready: true, restart count 2 Nov 5 23:44:49.944: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 5 23:44:49.944: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container nginx-proxy ready: true, restart count 2 Nov 5 23:44:49.944: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container nfd-worker ready: true, restart count 0 Nov 5 23:44:49.944: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 5 23:44:49.944: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 5 23:44:49.944: INFO: Container collectd ready: true, restart count 0 Nov 5 23:44:49.944: INFO: Container collectd-exporter ready: true, restart count 0 Nov 5 23:44:49.944: INFO: Container rbac-proxy ready: true, restart count 0 Nov 5 23:44:49.944: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:49.944: INFO: Container node-exporter ready: true, restart count 0 Nov 5 23:44:49.944: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 5 23:44:49.944: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 5 23:44:49.944: INFO: Container 
prometheus-operator ready: true, restart count 0 Nov 5 23:44:49.944: INFO: with-labels from sched-pred-4060 started at 2021-11-05 23:44:45 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container with-labels ready: true, restart count 0 Nov 5 23:44:49.944: INFO: pod1-sched-preemption-medium-priority from sched-preemption-2039 started at 2021-11-05 23:44:25 +0000 UTC (1 container statuses recorded) Nov 5 23:44:49.944: INFO: Container pod1-sched-preemption-medium-priority ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4f9251f3-fdc2-45e6-995d-78cdfa1f9cb6 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.207 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-4f9251f3-fdc2-45e6-995d-78cdfa1f9cb6 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4f9251f3-fdc2-45e6-995d-78cdfa1f9cb6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 5 23:49:58.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6783" for this suite. 
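Most of this test's 308 seconds is the suite confirming that pod5 never schedules: a hostPort bound on 0.0.0.0 conflicts with the same port and protocol on any specific hostIP of that node, so pod4 and pod5 are mutually exclusive there. A sketch of the conflicting port declarations — the container port value and TCP protocol are assumptions; the log only fixes hostPort 54322 and the two host IPs:

package hostportsketch

import corev1 "k8s.io/api/core/v1"

// conflictingPort builds the port entry for pod4 (hostIP "0.0.0.0") and
// pod5 (hostIP "10.10.190.207"); identical hostPort and protocol make
// the two pods mutually exclusive on one node.
func conflictingPort(hostIP string) corev1.ContainerPort {
    return corev1.ContainerPort{
        ContainerPort: 54322, // assumed equal to the host port
        HostPort:      54322,
        HostIP:        hostIP,
        Protocol:      corev1.ProtocolTCP, // protocol assumed TCP
    }
}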
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.162 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":12,"skipped":4524,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 5 23:49:58.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster.
Nov 5 23:49:58.113: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:58.113: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:58.113: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:58.115: INFO: Number of nodes with available pods: 0 Nov 5 23:49:58.115: INFO: Node node1 is running more than one daemon pod Nov 5 23:49:59.121: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:59.121: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:59.121: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:49:59.124: INFO: Number of nodes with available pods: 0 Nov 5 23:49:59.124: INFO: Node node1 is running more than one daemon pod Nov 5 23:50:00.122: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:00.122: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:00.122: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:00.125: INFO: Number of nodes with available pods: 0 Nov 5 23:50:00.125: INFO: Node node1 is running more than one daemon pod Nov 5 23:50:01.120: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:01.121: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:01.121: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:01.123: INFO: Number of nodes with available pods: 1 Nov 5 23:50:01.123: INFO: Node node2 is running more than one daemon pod Nov 5 23:50:02.121: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:02.121: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:02.121: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 5 23:50:02.124: INFO: Number of nodes with available pods: 2 Nov 5 23:50:02.124: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Nov 5 23:50:02.139: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:02.139: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:02.139: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:02.142: INFO: Number of nodes with available pods: 1
Nov 5 23:50:02.142: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:03.148: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:03.148: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:03.148: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:03.150: INFO: Number of nodes with available pods: 1
Nov 5 23:50:03.150: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:04.148: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:04.148: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:04.148: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:04.151: INFO: Number of nodes with available pods: 1
Nov 5 23:50:04.151: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:05.149: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:05.149: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:05.149: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:05.151: INFO: Number of nodes with available pods: 1
Nov 5 23:50:05.151: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:06.148: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:06.148: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:06.148: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:06.150: INFO: Number of nodes with available pods: 1
Nov 5 23:50:06.150: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:07.150: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:07.150: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:07.150: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:07.153: INFO: Number of nodes with available pods: 1
Nov 5 23:50:07.153: INFO: Node node1 is running more than one daemon pod
Nov 5 23:50:08.151: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:08.151: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:08.151: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 5 23:50:08.154: INFO: Number of nodes with available pods: 2
Nov 5 23:50:08.154: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5924, will wait for the garbage collector to delete the pods
Nov 5 23:50:08.216: INFO: Deleting DaemonSet.extensions daemon-set took: 4.977445ms
Nov 5 23:50:08.316: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.110258ms
Nov 5 23:50:18.819: INFO: Number of nodes with available pods: 0
Nov 5 23:50:18.819: INFO: Number of running nodes: 0, number of available pods: 0
Nov 5 23:50:18.821: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"57823"},"items":null}
Nov 5 23:50:18.824: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"57823"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:50:18.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5924" for this suite.
• [SLOW TEST:20.787 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":13,"skipped":4696,"failed":0}
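The polling above only ever counts node1 and node2 because the test's DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so the three masters are skipped. For orientation, a minimal sketch of a comparable DaemonSet that would also cover the tainted masters; the labels and pause image are illustrative assumptions, not what the e2e framework actually generates:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set              # same name the test uses; namespace omitted
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          tolerations:
          # Without this toleration the scheduler skips the tainted masters,
          # which is why the log above only ever reaches 2 available pods.
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.4.1   # placeholder image (assumption)

With the toleration in place, the "revived" check would wait for five available pods (one per node) instead of two.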
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:50:18.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Nov 5 23:50:19.153: INFO: Pod name wrapped-volume-race-b20061fe-ab1f-48d6-b5d9-e16209724185: Found 3 pods out of 5
Nov 5 23:50:24.162: INFO: Pod name wrapped-volume-race-b20061fe-ab1f-48d6-b5d9-e16209724185: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b20061fe-ab1f-48d6-b5d9-e16209724185 in namespace emptydir-wrapper-6468, will wait for the garbage collector to delete the pods
Nov 5 23:50:40.247: INFO: Deleting ReplicationController wrapped-volume-race-b20061fe-ab1f-48d6-b5d9-e16209724185 took: 6.404653ms
Nov 5 23:50:40.347: INFO: Terminating ReplicationController wrapped-volume-race-b20061fe-ab1f-48d6-b5d9-e16209724185 pods took: 100.153159ms
STEP: Creating RC which spawns configmap-volume pods
Nov 5 23:50:48.863: INFO: Pod name wrapped-volume-race-a4ee1fef-25b3-4123-9967-b1d1bf72c154: Found 0 pods out of 5
Nov 5 23:50:53.873: INFO: Pod name wrapped-volume-race-a4ee1fef-25b3-4123-9967-b1d1bf72c154: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a4ee1fef-25b3-4123-9967-b1d1bf72c154 in namespace emptydir-wrapper-6468, will wait for the garbage collector to delete the pods
Nov 5 23:51:09.952: INFO: Deleting ReplicationController wrapped-volume-race-a4ee1fef-25b3-4123-9967-b1d1bf72c154 took: 4.884863ms
Nov 5 23:51:10.052: INFO: Terminating ReplicationController wrapped-volume-race-a4ee1fef-25b3-4123-9967-b1d1bf72c154 pods took: 100.309535ms
STEP: Creating RC which spawns configmap-volume pods
Nov 5 23:51:18.869: INFO: Pod name wrapped-volume-race-5b6855e3-2255-42ee-964e-f8609a498e2b: Found 0 pods out of 5
Nov 5 23:51:23.878: INFO: Pod name wrapped-volume-race-5b6855e3-2255-42ee-964e-f8609a498e2b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5b6855e3-2255-42ee-964e-f8609a498e2b in namespace emptydir-wrapper-6468, will wait for the garbage collector to delete the pods
Nov 5 23:51:39.966: INFO: Deleting ReplicationController wrapped-volume-race-5b6855e3-2255-42ee-964e-f8609a498e2b took: 6.030264ms
Nov 5 23:51:40.067: INFO: Terminating ReplicationController wrapped-volume-race-5b6855e3-2255-42ee-964e-f8609a498e2b pods took: 100.729238ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:51:49.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6468" for this suite.
• [SLOW TEST:90.198 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":14,"skipped":4706,"failed":0}
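Each ReplicationController above spawns five pods that mount many of the 50 ConfigMaps as separate volumes; the race being probed is between concurrent volume setup on the kubelet and the garbage collector tearing the previous generation of pods down. A trimmed, hypothetical sketch of one such pod (the real test pods mount far more volumes, and all names here are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapped-volume-race-example   # hypothetical name
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.4.1     # placeholder image (assumption)
        volumeMounts:
        - name: cm-0
          mountPath: /etc/cm-0
        - name: cm-1
          mountPath: /etc/cm-1
      volumes:
      # Each volume wraps one of the pre-created ConfigMaps; the kubelet must
      # mount every one of them before the pod can report Running.
      - name: cm-0
        configMap:
          name: configmap-0
      - name: cm-1
        configMap:
          name: configmap-1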
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:51:49.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 5 23:51:49.081: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 5 23:52:49.143: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Nov 5 23:52:49.169: INFO: Created pod: pod0-sched-preemption-low-priority
Nov 5 23:52:49.191: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:53:03.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1623" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:74.223 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":15,"skipped":5125,"failed":0}
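Basic preemption rests on PriorityClass objects: the two filler pods occupy 2/3 of node resources at low and medium priority, and a high-priority pod with the same requests can only be scheduled if the scheduler evicts the low-priority one. A sketch of the two kinds of objects involved, with hypothetical names and values:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: high-priority             # hypothetical name
    value: 1000000                    # the higher value wins during preemption
    globalDefault: false
    description: "Pods at this priority may preempt lower-priority pods."
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: preemptor                 # hypothetical name
    spec:
      priorityClassName: high-priority
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.4.1 # placeholder image (assumption)
        resources:
          requests:
            cpu: "2"                  # sized to force eviction (assumption)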
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:53:03.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 5 23:53:03.314: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 5 23:54:03.377: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:54:03.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Nov 5 23:54:07.453: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Nov 5 23:54:23.516: INFO: pods created so far: [1 1 1]
Nov 5 23:54:23.516: INFO: length of pods created so far: 3
Nov 5 23:54:27.531: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:54:34.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-865" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:54:34.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5636" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:91.332 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":16,"skipped":5389,"failed":0}
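PreemptionExecutionPath drives the same mechanism through ReplicaSets: the counts [1 1 1] and then [2 2 1] above are pods created per ReplicaSet as higher-priority sets displace lower-priority pods on the chosen node (node2). A ReplicaSet pinned to a priority class might look like the following sketch; every name and number here is an assumption for illustration:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: rs-high                    # hypothetical name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: rs-high
      template:
        metadata:
          labels:
            app: rs-high
        spec:
          priorityClassName: high-priority   # hypothetical PriorityClass
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.4.1    # placeholder image (assumption)
            resources:
              requests:
                cpu: "1"                     # assumption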
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 5 23:54:34.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 5 23:54:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-449" for this suite.
STEP: Destroying namespace "nspatchtest-19cd104e-dcd9-4e82-aaee-26e9d34af429-3437" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":17,"skipped":5620,"failed":0}
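The patch step is an ordinary merge patch against the Namespace object, after which the test GETs the namespace and asserts the label is present. Expressed as a patch file (the label key and value are assumptions), it could be applied with kubectl patch namespace <name> --type=merge --patch-file=label-patch.yaml:

    # label-patch.yaml -- a merge patch only adds/overwrites the listed fields;
    # the namespace's other metadata is left untouched.
    metadata:
      labels:
        testLabel: testValue   # hypothetical key/value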
Nov 5 23:54:34.677: INFO: Running AfterSuite actions on all nodes
Nov 5 23:54:34.677: INFO: Running AfterSuite actions on node 1
Nov 5 23:54:34.677: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml

{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0}

Ran 17 of 5770 Specs in 868.780 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped
PASS

Ginkgo ran 1 suite in 14m30.131248762s
Test Suite Passed