I0520 22:16:33.634534 23 e2e.go:129] Starting e2e run "eb77c560-4e6a-4e8c-a3dc-9ae03ac9b924" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1653084992 - Will randomize all specs Will run 17 of 5773 specs May 20 22:16:33.693: INFO: >>> kubeConfig: /root/.kube/config May 20 22:16:33.698: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 20 22:16:33.727: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 22:16:33.796: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting May 20 22:16:33.796: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting May 20 22:16:33.796: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 22:16:33.796: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 20 22:16:33.796: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 20 22:16:33.814: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 20 22:16:33.814: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 20 22:16:33.814: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 20 22:16:33.814: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 20 22:16:33.814: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 20 22:16:33.814: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 20 22:16:33.814: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 20 22:16:33.814: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 20 22:16:33.814: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 20 22:16:33.814: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 20 22:16:33.814: INFO: e2e test version: v1.21.9 May 20 22:16:33.819: INFO: kube-apiserver version: v1.21.1 May 20 22:16:33.819: INFO: >>> kubeConfig: /root/.kube/config May 20 22:16:33.826: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:16:33.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption W0520 22:16:33.853927 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 
22:16:33.854: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 22:16:33.857: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 20 22:16:33.868: INFO: Waiting up to 1m0s for all nodes to be ready May 20 22:17:33.930: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:17:33.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:17:33.968: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. May 20 22:17:33.975: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:17:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5048" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:17:34.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-265" for this suite. 
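The two "Value: Forbidden: may not be changed in an update" messages above are the expected outcome of this test: a PriorityClass's value is immutable after creation, and the test confirms the API server rejects such updates. A minimal client-go sketch of the same check, assuming a cluster reachable via ~/.kube/config (the class name and values are illustrative, not taken from the test source):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a PriorityClass, then try to change its Value on update.
	pc, err := cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-p1"}, // illustrative name
		Value:      100,
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	pc.Value = 200 // the value field is immutable; the apiserver must reject this
	_, err = cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
	fmt.Println("update with changed value:", err) // expect: Value: Forbidden: may not be changed in an update

	_ = cs.SchedulingV1().PriorityClasses().Delete(ctx, "demo-p1", metav1.DeleteOptions{})
}
```

Mutable fields such as annotations or the description can still be updated; only the integer value (and globalDefault semantics around it) is locked down, which is why the test exercises the endpoint with several HTTP methods rather than a single update.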
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.228 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":1,"skipped":150,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:17:34.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:18:05.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6494" for this suite. STEP: Destroying namespace "nsdeletetest-9916" for this suite. May 20 22:18:05.210: INFO: Namespace nsdeletetest-9916 was already deleted STEP: Destroying namespace "nsdeletetest-79" for this suite. 
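The namespace test above follows a create / delete / wait / recreate cycle: deleting a namespace cascades to everything inside it, and the namespace object itself only disappears after its finalizers run. A sketch of the same cycle with client-go, assuming a reachable cluster (namespace and pod names, the pause image, and the polling interval are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "nsdelete-demo"

	// Create a test namespace and a pod inside it.
	if _, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.4.1",
		}}},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Deleting the namespace cascades to everything inside it.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Wait until the namespace object itself is gone (finalizers run first).
	if err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	}); err != nil {
		panic(err)
	}

	// Recreate the namespace and verify nothing survived the deletion.
	if _, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods in recreated namespace: %d\n", len(pods.Items)) // expect 0
}
```

The companion "all services are removed" test later in this run follows the identical pattern with a Service instead of a Pod.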
• [SLOW TEST:31.155 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":2,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:18:05.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:18:05.258: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 20 22:18:05.266: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:05.267: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:05.267: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:05.269: INFO: Number of nodes with available pods: 0 May 20 22:18:05.269: INFO: Node node1 is running more than one daemon pod May 20 22:18:06.275: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:06.275: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:06.275: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:06.279: INFO: Number of nodes with available pods: 0 May 20 22:18:06.279: INFO: Node node1 is running more than one daemon pod May 20 22:18:07.274: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:07.274: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:07.274: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:07.277: INFO: Number of nodes with available pods: 0 May 20 22:18:07.277: INFO: Node node1 is running more than one daemon pod May 20 22:18:08.275: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:08.275: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:08.275: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:08.278: INFO: Number of nodes with available pods: 2 May 20 22:18:08.278: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 20 22:18:08.307: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 20 22:18:08.312: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:08.312: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:08.312: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:09.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:09.320: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:09.320: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:09.320: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:10.322: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:10.330: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:10.331: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:10.331: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:11.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:11.320: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:11.320: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:11.320: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:12.317: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:12.322: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:12.323: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:12.323: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:13.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 20 22:18:13.321: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:13.321: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:13.321: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:14.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:14.321: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:14.321: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:14.321: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:15.319: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:15.324: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:15.324: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:15.324: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:16.318: INFO: Pod daemon-set-l82dl is not available May 20 22:18:16.318: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:16.323: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:16.323: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:16.323: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:17.316: INFO: Pod daemon-set-l82dl is not available May 20 22:18:17.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 20 22:18:17.320: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:17.320: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:17.320: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:18.316: INFO: Pod daemon-set-l82dl is not available May 20 22:18:18.316: INFO: Wrong image for pod: daemon-set-ttmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 20 22:18:18.320: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:18.321: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:18.321: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:19.321: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:19.321: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:19.321: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:20.320: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:20.320: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:20.320: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:21.322: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:21.322: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:21.322: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.317: INFO: Pod daemon-set-2n6rb is not available May 20 22:18:22.321: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.321: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.321: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 20 22:18:22.327: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.327: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.327: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:22.329: INFO: Number of nodes with available pods: 1 May 20 22:18:22.329: INFO: Node node2 is running more than one daemon pod May 20 22:18:23.337: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:23.337: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:23.337: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:23.340: INFO: Number of nodes with available pods: 1 May 20 22:18:23.340: INFO: Node node2 is running more than one daemon pod May 20 22:18:24.336: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:24.336: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:24.336: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:18:24.339: INFO: Number of nodes with available pods: 2 May 20 22:18:24.339: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9282, will wait for the garbage collector to delete the pods May 20 22:18:24.412: INFO: Deleting DaemonSet.extensions daemon-set took: 5.82821ms May 20 22:18:24.512: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.138467ms May 20 22:18:36.915: INFO: Number of nodes with available pods: 0 May 20 22:18:36.915: INFO: Number of running nodes: 0, number of available pods: 0 May 20 22:18:36.918: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51905"},"items":null} May 20 22:18:36.920: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51905"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:18:36.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9282" for this suite. 
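The long "Wrong image for pod" / "is not available" stretch above is the RollingUpdate strategy at work: old pods are deleted and replaced one batch at a time, so the check keeps polling until every schedulable node runs the new image. A sketch of the update step that triggers such a rollout, assuming a strategic-merge patch against an existing DaemonSet (the namespace, DaemonSet name, and container name "app" are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Strategic-merge patch: containers merge by name, so this swaps only
	// the image, which is what the "Update daemon pods image" step does.
	// With the default RollingUpdate strategy, the controller then deletes
	// and recreates pods batch by batch, producing the "Wrong image for
	// pod" polling seen in the log until the rollout completes.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	ds, err := cs.AppsV1().DaemonSets("daemonsets-demo").Patch(
		ctx, "daemon-set", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("updateStrategy:", ds.Spec.UpdateStrategy.Type)
	fmt.Println("desired/updated:", ds.Status.DesiredNumberScheduled, ds.Status.UpdatedNumberScheduled)
}
```

The repeated "can't tolerate node master1/2/3" lines are expected on this cluster: the DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so only node1 and node2 are counted toward the rollout.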
• [SLOW TEST:31.719 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":3,"skipped":738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:18:36.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:18:43.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5126" for this suite. STEP: Destroying namespace "nsdeletetest-4526" for this suite. May 20 22:18:43.037: INFO: Namespace nsdeletetest-4526 was already deleted STEP: Destroying namespace "nsdeletetest-3060" for this suite. 
• [SLOW TEST:6.099 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":4,"skipped":867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:18:43.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 22:18:43.081: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 22:18:43.089: INFO: Waiting for terminating namespaces to be deleted... 
May 20 22:18:43.092: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 22:18:43.103: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 22:18:43.103: INFO: Container nodereport ready: true, restart count 0 May 20 22:18:43.103: INFO: Container reconcile ready: true, restart count 0 May 20 22:18:43.103: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 22:18:43.103: INFO: Container discover ready: false, restart count 0 May 20 22:18:43.103: INFO: Container init ready: false, restart count 0 May 20 22:18:43.104: INFO: Container install ready: false, restart count 0 May 20 22:18:43.104: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:18:43.104: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container kube-multus ready: true, restart count 1 May 20 22:18:43.104: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:18:43.104: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:18:43.104: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:18:43.104: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:18:43.104: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:18:43.104: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:18:43.104: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:18:43.104: INFO: Container collectd ready: true, restart count 0 May 20 22:18:43.104: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:18:43.104: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:18:43.104: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:18:43.104: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:18:43.104: INFO: Container node-exporter ready: true, restart count 0 May 20 22:18:43.104: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 22:18:43.104: INFO: Container config-reloader ready: true, restart count 0 May 20 22:18:43.104: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:18:43.104: INFO: Container grafana ready: true, restart count 0 May 20 22:18:43.104: INFO: Container prometheus ready: true, restart count 1 May 20 22:18:43.104: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 22:18:43.113: 
INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 22:18:43.113: INFO: Container nodereport ready: true, restart count 0 May 20 22:18:43.113: INFO: Container reconcile ready: true, restart count 0 May 20 22:18:43.113: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 22:18:43.113: INFO: Container discover ready: false, restart count 0 May 20 22:18:43.113: INFO: Container init ready: false, restart count 0 May 20 22:18:43.113: INFO: Container install ready: false, restart count 0 May 20 22:18:43.113: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:18:43.113: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:18:43.113: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container kube-multus ready: true, restart count 1 May 20 22:18:43.113: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:18:43.113: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:18:43.113: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:18:43.113: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:18:43.113: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:18:43.113: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:18:43.113: INFO: Container collectd ready: true, restart count 0 May 20 22:18:43.113: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:18:43.113: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:18:43.113: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:18:43.113: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:18:43.113: INFO: Container node-exporter ready: true, restart count 0 May 20 22:18:43.113: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 22:18:43.113: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-16e59770-8501-4a8f-8f57-749ff4bcd449 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-16e59770-8501-4a8f-8f57-749ff4bcd449 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-16e59770-8501-4a8f-8f57-749ff4bcd449 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:23:51.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7512" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":5,"skipped":1970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:23:51.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 22:23:51.254: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 22:23:51.262: INFO: Waiting for terminating namespaces to be deleted... 
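The pod4/pod5 steps in the test above encode the rule under check: a hostPort bound to hostIP 0.0.0.0 claims the port on every address of the node, so another pod on the same node requesting the same port and protocol with a specific hostIP conflicts and is left unscheduled. A sketch of the two pod specs, reusing the port and address from the log; the namespace is illustrative, and the pinning here uses the built-in hostname label rather than the random label the test applies:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// hostPortPod pins a pod to node2 via the built-in hostname label and
// requests TCP hostPort 54322 on the given hostIP.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": "node2"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "default" // illustrative namespace

	// pod4: hostIP 0.0.0.0 means "every address on the node" for this port.
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, hostPortPod("pod4", "0.0.0.0"), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// pod5: same node, same port and protocol, but a specific hostIP. The
	// scheduler treats this as a conflict with pod4, so pod5 stays Pending.
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, hostPortPod("pod5", "10.10.190.208"), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	p5, err := cs.CoreV1().Pods(ns).Get(ctx, "pod5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod5 phase:", p5.Status.Phase) // expected: Pending
}
```

The five-minute wall time of this spec (SLOW TEST: 308s) comes from the test holding pod5 in Pending long enough to be confident it will never schedule, not from any slow API call.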
May 20 22:23:51.265: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 22:23:51.279: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 22:23:51.279: INFO: Container nodereport ready: true, restart count 0 May 20 22:23:51.279: INFO: Container reconcile ready: true, restart count 0 May 20 22:23:51.279: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 22:23:51.279: INFO: Container discover ready: false, restart count 0 May 20 22:23:51.279: INFO: Container init ready: false, restart count 0 May 20 22:23:51.279: INFO: Container install ready: false, restart count 0 May 20 22:23:51.279: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:23:51.280: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container kube-multus ready: true, restart count 1 May 20 22:23:51.280: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:23:51.280: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:23:51.280: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:23:51.280: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:23:51.280: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:23:51.280: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:23:51.280: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:23:51.280: INFO: Container collectd ready: true, restart count 0 May 20 22:23:51.280: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:23:51.280: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:23:51.280: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:23:51.280: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:23:51.280: INFO: Container node-exporter ready: true, restart count 0 May 20 22:23:51.280: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 22:23:51.280: INFO: Container config-reloader ready: true, restart count 0 May 20 22:23:51.280: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:23:51.280: INFO: Container grafana ready: true, restart count 0 May 20 22:23:51.280: INFO: Container prometheus ready: true, restart count 1 May 20 22:23:51.280: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 22:23:51.290: 
INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 22:23:51.290: INFO: Container nodereport ready: true, restart count 0 May 20 22:23:51.290: INFO: Container reconcile ready: true, restart count 0 May 20 22:23:51.290: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 22:23:51.290: INFO: Container discover ready: false, restart count 0 May 20 22:23:51.290: INFO: Container init ready: false, restart count 0 May 20 22:23:51.290: INFO: Container install ready: false, restart count 0 May 20 22:23:51.290: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:23:51.290: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:23:51.290: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container kube-multus ready: true, restart count 1 May 20 22:23:51.290: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:23:51.290: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:23:51.290: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:23:51.290: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:23:51.290: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:23:51.290: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:23:51.290: INFO: Container collectd ready: true, restart count 0 May 20 22:23:51.290: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:23:51.290: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:23:51.290: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:23:51.290: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:23:51.290: INFO: Container node-exporter ready: true, restart count 0 May 20 22:23:51.290: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container tas-extender ready: true, restart count 0 May 20 22:23:51.290: INFO: pod4 from sched-pred-7512 started at 2022-05-20 22:18:47 +0000 UTC (1 container statuses recorded) May 20 22:23:51.290: INFO: Container agnhost ready: true, restart count 0 
[It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 May 20 22:23:57.430: INFO: Pod cmk-9hxtl requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod cmk-c5x47 requesting resource cpu=0m on Node node1 May 20 22:23:57.430: INFO: Pod cmk-webhook-6c9d5f8578-5kbbc requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod kube-flannel-2blt7 requesting resource cpu=150m on Node node1 May 20 22:23:57.430: INFO: Pod kube-flannel-jpmpd requesting resource cpu=150m on Node node2 May 20 22:23:57.430: INFO: Pod kube-multus-ds-amd64-krd6m requesting resource cpu=100m on Node node1 May 20 22:23:57.430: INFO: Pod kube-multus-ds-amd64-p22zp requesting resource cpu=100m on Node node2 May 20 22:23:57.430: INFO: Pod kube-proxy-rg2fp requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod kube-proxy-v8kzq requesting resource cpu=0m on Node node1 May 20 22:23:57.430: INFO: Pod kubernetes-dashboard-785dcbb76d-6c2f8 requesting resource cpu=50m on Node node1 May 20 22:23:57.430: INFO: Pod kubernetes-metrics-scraper-5558854cb-66r9g requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 May 20 22:23:57.430: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 May 20 22:23:57.430: INFO: Pod node-feature-discovery-worker-nphk9 requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod node-feature-discovery-worker-rh55h requesting resource cpu=0m on Node node1 May 20 22:23:57.430: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl requesting resource cpu=0m on Node node1 May 20 22:23:57.430: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod collectd-875j8 requesting resource cpu=0m on Node node1 May 20 22:23:57.430: INFO: Pod collectd-h4pzk requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod node-exporter-czwvh requesting resource cpu=112m on Node node1 May 20 22:23:57.430: INFO: Pod node-exporter-vm24n requesting resource cpu=112m on Node node2 May 20 22:23:57.430: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 May 20 22:23:57.430: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-ddzzd requesting resource cpu=0m on Node node2 May 20 22:23:57.430: INFO: Pod pod4 requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. May 20 22:23:57.430: INFO: Creating a pod which consumes cpu=53454m on Node node1 May 20 22:23:57.441: INFO: Creating a pod which consumes cpu=53629m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f.16f0f0726abc21b9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3828/filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f.16f0f072c0b5edcd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f.16f0f072d7dffb03], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 388.624569ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f.16f0f072de75a58f], Reason = [Created], Message = [Created container filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f] STEP: Considering event: Type = [Normal], Name = [filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f.16f0f072e515ca7b], Reason = [Started], Message = [Started container filler-pod-146a39d2-e0e2-4847-822f-e1a6c0574b8f] STEP: Considering event: Type = [Normal], Name = [filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8.16f0f0726a389831], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3828/filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8.16f0f072c348b4b0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8.16f0f072e0cc640d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 495.16165ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8.16f0f072e823f2fa], Reason = [Created], Message = [Created container filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8] STEP: Considering event: Type = [Normal], Name = [filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8.16f0f072ef33eaf4], Reason = [Started], Message = [Started container filler-pod-1d9edd31-c4ea-4749-9d63-ca2ac288bff8] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f0f0735a9891dc], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:24:02.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3828" for this suite. 
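The per-pod "requesting resource cpu=..." lines above are the bookkeeping behind this test: for each node it sums the CPU requests of the pods already bound there, subtracts that from the node's allocatable CPU, fills most of the remainder with filler pods, and expects one more pod to fail with "Insufficient cpu". A sketch of that accounting for a single node, assuming a reachable cluster (the node name is illustrative; init containers and overhead are ignored for brevity):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	nodeName := "node1" // illustrative

	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	allocatable := node.Status.Allocatable[corev1.ResourceCPU]

	// Sum CPU requests of every non-terminal pod bound to the node,
	// mirroring the per-pod "requesting resource cpu=..." lines above.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	requested := resource.NewMilliQuantity(0, resource.DecimalSI)
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			continue // terminal pods no longer hold their requests
		}
		for _, c := range p.Spec.Containers {
			requested.Add(*c.Resources.Requests.Cpu())
		}
	}

	free := allocatable.DeepCopy()
	free.Sub(*requested)
	fmt.Printf("allocatable=%s requested=%s free=%s\n",
		allocatable.String(), requested.String(), free.String())
	// A pod whose CPU request exceeds `free` on every schedulable node
	// fails with "0/N nodes are available: ... Insufficient cpu", which is
	// exactly the FailedScheduling event recorded for additional-pod above.
}
```

Note the filler sizes in the log (cpu=53454m and cpu=53629m): they are computed per node from this same allocatable-minus-requested remainder, which is why they differ between node1 and node2.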
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.293 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":6,"skipped":2112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:24:02.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 20 22:24:02.561: INFO: Waiting up to 1m0s for all nodes to be ready May 20 22:25:02.617: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. May 20 22:25:02.644: INFO: Created pod: pod0-sched-preemption-low-priority May 20 22:25:02.664: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:25:24.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3767" for this suite. 
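Preemption in the test above hinges on PriorityClass values: the low- and medium-priority pods fill 2/3 of each node's resources, then a high-priority pod with the same demand arrives, and the scheduler evicts a lower-priority victim to bind it. A sketch of the two classes and a preemptor pod, assuming an existing namespace (names, values, and the CPU request are illustrative, not the test's computed sizes):

```go
package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Two classes; the scheduler may evict pods of a lower value to make
	// room for pods of a higher one (preemptionPolicy defaults to
	// PreemptLowerPriority).
	for name, val := range map[string]int32{"low-priority": 10, "high-priority": 1000} {
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      val,
		}, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// A preemptor sized like the victims: if no node has room, the
	// scheduler deletes a low-priority pod and binds this one in its place.
	if _, err := cs.CoreV1().Pods("preemption-demo").Create(ctx, &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("2"),
					},
				},
			}},
		},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The roughly 20 seconds between "Run a high priority pod..." and teardown in the log is the victim's graceful termination plus the preemptor's scheduling and startup.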
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.232 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":7,"skipped":2141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:25:24.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 22:25:24.791: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 22:25:24.800: INFO: Waiting for terminating namespaces to be deleted... 
May 20 22:25:24.802: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 22:25:24.812: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 22:25:24.812: INFO: Container nodereport ready: true, restart count 0 May 20 22:25:24.812: INFO: Container reconcile ready: true, restart count 0 May 20 22:25:24.812: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 22:25:24.812: INFO: Container discover ready: false, restart count 0 May 20 22:25:24.812: INFO: Container init ready: false, restart count 0 May 20 22:25:24.812: INFO: Container install ready: false, restart count 0 May 20 22:25:24.812: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:25:24.812: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container kube-multus ready: true, restart count 1 May 20 22:25:24.812: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:25:24.812: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:25:24.812: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:25:24.812: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:25:24.812: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:25:24.812: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:25:24.812: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:25:24.812: INFO: Container collectd ready: true, restart count 0 May 20 22:25:24.812: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:25:24.812: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:25:24.812: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:25:24.812: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:25:24.812: INFO: Container node-exporter ready: true, restart count 0 May 20 22:25:24.812: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 22:25:24.812: INFO: Container config-reloader ready: true, restart count 0 May 20 22:25:24.812: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:25:24.812: INFO: Container grafana ready: true, restart count 0 May 20 22:25:24.812: INFO: Container prometheus ready: true, restart count 1 May 20 22:25:24.812: INFO: preemptor-pod from sched-preemption-3767 started at 2022-05-20 22:25:20 +0000 UTC (1 
container statuses recorded) May 20 22:25:24.812: INFO: Container preemptor-pod ready: true, restart count 0 May 20 22:25:24.812: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 22:25:24.820: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 22:25:24.820: INFO: Container nodereport ready: true, restart count 0 May 20 22:25:24.820: INFO: Container reconcile ready: true, restart count 0 May 20 22:25:24.820: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 22:25:24.820: INFO: Container discover ready: false, restart count 0 May 20 22:25:24.820: INFO: Container init ready: false, restart count 0 May 20 22:25:24.820: INFO: Container install ready: false, restart count 0 May 20 22:25:24.820: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:25:24.820: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:25:24.820: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container kube-multus ready: true, restart count 1 May 20 22:25:24.820: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:25:24.820: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:25:24.820: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:25:24.820: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:25:24.820: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:25:24.820: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:25:24.820: INFO: Container collectd ready: true, restart count 0 May 20 22:25:24.820: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:25:24.820: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:25:24.820: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:25:24.820: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:25:24.820: INFO: Container node-exporter ready: true, restart count 0 May 20 22:25:24.820: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container tas-extender ready: true, 
restart count 0 May 20 22:25:24.820: INFO: pod1-sched-preemption-medium-priority from sched-preemption-3767 started at 2022-05-20 22:25:14 +0000 UTC (1 container statuses recorded) May 20 22:25:24.820: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d981038d-7614-47f2-ad29-94d5a5e5ed98 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d981038d-7614-47f2-ad29-94d5a5e5ed98 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d981038d-7614-47f2-ad29-94d5a5e5ed98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:25:32.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2080" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.140 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":8,"skipped":2667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:25:32.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:25:32.950: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
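------------------------------
A DaemonSet created with a node selector, as above, schedules pods only onto nodes whose labels match, which is why relabeling a node to blue launches a pod and relabeling it to green unschedules it again. A sketch of such a spec; the selector label daemonset-name and the node label key color are assumptions for illustration.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods are created only on nodes carrying this label;
					// the key is an assumed stand-in for the test's label.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.4.1",
					}},
				},
			},
		},
	}
	out, err := yaml.Marshal(ds)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------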
May 20 22:25:32.956: INFO: Number of nodes with available pods: 0 May 20 22:25:32.956: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 20 22:25:32.973: INFO: Number of nodes with available pods: 0 May 20 22:25:32.973: INFO: Node node1 is running more than one daemon pod May 20 22:25:33.976: INFO: Number of nodes with available pods: 0 May 20 22:25:33.977: INFO: Node node1 is running more than one daemon pod May 20 22:25:34.977: INFO: Number of nodes with available pods: 0 May 20 22:25:34.977: INFO: Node node1 is running more than one daemon pod May 20 22:25:35.980: INFO: Number of nodes with available pods: 1 May 20 22:25:35.980: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 20 22:25:35.996: INFO: Number of nodes with available pods: 1 May 20 22:25:35.996: INFO: Number of running nodes: 0, number of available pods: 1 May 20 22:25:37.001: INFO: Number of nodes with available pods: 0 May 20 22:25:37.001: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 20 22:25:37.009: INFO: Number of nodes with available pods: 0 May 20 22:25:37.009: INFO: Node node1 is running more than one daemon pod May 20 22:25:38.015: INFO: Number of nodes with available pods: 0 May 20 22:25:38.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:39.012: INFO: Number of nodes with available pods: 0 May 20 22:25:39.012: INFO: Node node1 is running more than one daemon pod May 20 22:25:40.014: INFO: Number of nodes with available pods: 0 May 20 22:25:40.014: INFO: Node node1 is running more than one daemon pod May 20 22:25:41.016: INFO: Number of nodes with available pods: 0 May 20 22:25:41.016: INFO: Node node1 is running more than one daemon pod May 20 22:25:42.015: INFO: Number of nodes with available pods: 0 May 20 22:25:42.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:43.015: INFO: Number of nodes with available pods: 0 May 20 22:25:43.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:44.014: INFO: Number of nodes with available pods: 0 May 20 22:25:44.014: INFO: Node node1 is running more than one daemon pod May 20 22:25:45.015: INFO: Number of nodes with available pods: 0 May 20 22:25:45.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:46.015: INFO: Number of nodes with available pods: 0 May 20 22:25:46.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:47.015: INFO: Number of nodes with available pods: 0 May 20 22:25:47.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:48.015: INFO: Number of nodes with available pods: 0 May 20 22:25:48.015: INFO: Node node1 is running more than one daemon pod May 20 22:25:49.014: INFO: Number of nodes with available pods: 0 May 20 22:25:49.014: INFO: Node node1 is running more than one daemon pod May 20 22:25:50.013: INFO: Number of nodes with available pods: 1 May 20 22:25:50.013: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6324, will wait for the garbage collector to delete the pods May 20 
22:25:50.076: INFO: Deleting DaemonSet.extensions daemon-set took: 4.449552ms May 20 22:25:50.177: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.990348ms May 20 22:25:55.880: INFO: Number of nodes with available pods: 0 May 20 22:25:55.880: INFO: Number of running nodes: 0, number of available pods: 0 May 20 22:25:55.883: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53411"},"items":null} May 20 22:25:55.886: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53411"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:25:55.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6324" for this suite. • [SLOW TEST:23.002 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":9,"skipped":2934,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:25:55.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
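------------------------------
The repeated skip messages below reflect a simple check: a DaemonSet pod without a matching toleration cannot run on a node with a NoSchedule taint, so the three master nodes are excluded from the launch count. A sketch of that check against the live node list follows.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, t := range n.Spec.Taints {
			// A pod template with no matching toleration cannot land on a
			// node with a NoSchedule taint, which is why master1..master3
			// are skipped in the checks above and below.
			if t.Effect == corev1.TaintEffectNoSchedule {
				fmt.Printf("%s: skipped unless pod tolerates %s=%s\n", n.Name, t.Key, t.Value)
			}
		}
	}
}
------------------------------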
May 20 22:25:55.973: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:55.973: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:55.973: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:55.978: INFO: Number of nodes with available pods: 0 May 20 22:25:55.978: INFO: Node node1 is running more than one daemon pod May 20 22:25:56.983: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:56.983: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:56.983: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:56.987: INFO: Number of nodes with available pods: 0 May 20 22:25:56.987: INFO: Node node1 is running more than one daemon pod May 20 22:25:57.985: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:57.985: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:57.985: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:57.990: INFO: Number of nodes with available pods: 0 May 20 22:25:57.990: INFO: Node node1 is running more than one daemon pod May 20 22:25:58.984: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:58.984: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:58.984: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:58.987: INFO: Number of nodes with available pods: 1 May 20 22:25:58.987: INFO: Node node1 is running more than one daemon pod May 20 22:25:59.986: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:59.986: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:59.986: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:25:59.989: INFO: Number of nodes with available pods: 2 May 20 22:25:59.989: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 20 22:26:00.005: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:00.005: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:00.005: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:00.008: INFO: Number of nodes with available pods: 1 May 20 22:26:00.008: INFO: Node node1 is running more than one daemon pod May 20 22:26:01.014: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:01.014: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:01.014: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:01.016: INFO: Number of nodes with available pods: 1 May 20 22:26:01.016: INFO: Node node1 is running more than one daemon pod May 20 22:26:02.013: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:02.013: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:02.013: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:02.016: INFO: Number of nodes with available pods: 1 May 20 22:26:02.016: INFO: Node node1 is running more than one daemon pod May 20 22:26:03.016: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:03.016: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:03.016: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:03.018: INFO: Number of nodes with available pods: 1 May 20 22:26:03.018: INFO: Node node1 is running more than one daemon pod May 20 22:26:04.015: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:04.015: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:04.015: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:04.018: INFO: Number of nodes with available pods: 2 May 20 22:26:04.018: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
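------------------------------
Reviving a failed daemon pod is driven purely by status: once a pod reports phase Failed, the DaemonSet controller removes it and creates a replacement. A sketch that forces the condition through the status subresource; the namespace is taken from this run, but the label selector is an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "daemonsets-1557" // namespace from this run
	// Label selector is an assumed stand-in for the suite's pod labels.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(pods.Items) == 0 {
		panic(fmt.Sprintf("no daemon pods found: %v", err))
	}
	// Force one pod's phase to Failed via the status subresource; the
	// controller should notice and schedule a replacement.
	p := pods.Items[0]
	p.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(ns).UpdateStatus(ctx, &p, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("marked failed:", p.Name)
}
------------------------------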
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1557, will wait for the garbage collector to delete the pods May 20 22:26:04.083: INFO: Deleting DaemonSet.extensions daemon-set took: 5.381553ms May 20 22:26:04.183: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.357664ms May 20 22:26:16.887: INFO: Number of nodes with available pods: 0 May 20 22:26:16.887: INFO: Number of running nodes: 0, number of available pods: 0 May 20 22:26:16.890: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53545"},"items":null} May 20 22:26:16.893: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53545"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:26:16.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1557" for this suite. • [SLOW TEST:20.998 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":10,"skipped":3326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:26:16.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:26:16.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4142" for this suite. STEP: Destroying namespace "nspatchtest-7f48b7f1-a274-446e-9a7c-5b594f8d5f8c-4038" for this suite. 
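------------------------------
The namespace patch above boils down to a single strategic-merge patch that adds a label, followed by a read-back to confirm it. A sketch with an illustrative namespace name and label pair, neither taken from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Add one label via a strategic merge patch; the namespace name and the
	// label key/value are illustrative placeholders.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "nspatchtest-example",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", ns.Labels)
}
------------------------------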
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":11,"skipped":3379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:26:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:26:17.050: INFO: Create a RollingUpdate DaemonSet May 20 22:26:17.054: INFO: Check that daemon pods launch on every node of the cluster May 20 22:26:17.060: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:17.060: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:17.060: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:17.062: INFO: Number of nodes with available pods: 0 May 20 22:26:17.062: INFO: Node node1 is running more than one daemon pod May 20 22:26:18.067: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:18.067: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:18.067: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:18.070: INFO: Number of nodes with available pods: 0 May 20 22:26:18.070: INFO: Node node1 is running more than one daemon pod May 20 22:26:19.067: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:19.067: INFO: DaemonSet pods can't tolerate node master2 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:19.067: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:19.071: INFO: Number of nodes with available pods: 0 May 20 22:26:19.071: INFO: Node node1 is running more than one daemon pod May 20 22:26:20.070: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:20.071: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:20.071: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:20.073: INFO: Number of nodes with available pods: 0 May 20 22:26:20.073: INFO: Node node1 is running more than one daemon pod May 20 22:26:21.070: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:21.070: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:21.070: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:21.072: INFO: Number of nodes with available pods: 2 May 20 22:26:21.072: INFO: Number of running nodes: 2, number of available pods: 2 May 20 22:26:21.072: INFO: Update the DaemonSet to trigger a rollout May 20 22:26:21.081: INFO: Updating DaemonSet daemon-set May 20 22:26:37.096: INFO: Roll back the DaemonSet before rollout is complete May 20 22:26:37.103: INFO: Updating DaemonSet daemon-set May 20 22:26:37.103: INFO: Make sure DaemonSet rollback is complete May 20 22:26:37.106: INFO: Wrong image for pod: daemon-set-kg4rm. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
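------------------------------
Rolling back before the rollout completes, as above, is just another template update: writing the known-good image back stops the RollingUpdate from ever replacing the still-healthy pods, which is what "without unnecessary restarts" verifies. A sketch using the httpd image recorded above and the namespace from this run, retrying on write conflicts with the controller.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "daemonsets-360"
	// Re-fetch, rewrite the template image, and update; RetryOnConflict
	// absorbs concurrent writes by the DaemonSet controller.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"
		_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
------------------------------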
May 20 22:26:37.106: INFO: Pod daemon-set-kg4rm is not available May 20 22:26:37.110: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:37.111: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:37.111: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:38.122: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:38.122: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:38.122: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:39.118: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:39.118: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:39.118: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:40.116: INFO: Pod daemon-set-t7775 is not available May 20 22:26:40.121: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:40.121: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 22:26:40.121: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-360, will wait for the garbage collector to delete the pods May 20 22:26:40.184: INFO: Deleting DaemonSet.extensions daemon-set took: 5.746531ms May 20 22:26:40.285: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.114897ms May 20 22:26:46.689: INFO: Number of nodes with available pods: 0 May 20 22:26:46.689: INFO: Number of running nodes: 0, number of available pods: 0 May 20 22:26:46.691: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53763"},"items":null} May 20 22:26:46.694: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53763"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:26:46.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-360" for this suite. 
• [SLOW TEST:29.706 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":12,"skipped":4036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:26:46.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 22:26:46.742: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 22:26:46.750: INFO: Waiting for terminating namespaces to be deleted... May 20 22:26:46.753: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 22:26:46.769: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 22:26:46.769: INFO: Container nodereport ready: true, restart count 0 May 20 22:26:46.769: INFO: Container reconcile ready: true, restart count 0 May 20 22:26:46.769: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 22:26:46.769: INFO: Container discover ready: false, restart count 0 May 20 22:26:46.769: INFO: Container init ready: false, restart count 0 May 20 22:26:46.769: INFO: Container install ready: false, restart count 0 May 20 22:26:46.769: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container kube-flannel ready: true, restart count 3 May 20 22:26:46.769: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container kube-multus ready: true, restart count 1 May 20 22:26:46.769: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:26:46.769: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 22:26:46.769: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:26:46.769: INFO: 
node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:26:46.769: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:26:46.769: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:26:46.769: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:26:46.769: INFO: Container collectd ready: true, restart count 0 May 20 22:26:46.769: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:26:46.769: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:26:46.769: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:26:46.769: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:26:46.769: INFO: Container node-exporter ready: true, restart count 0 May 20 22:26:46.769: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 22:26:46.769: INFO: Container config-reloader ready: true, restart count 0 May 20 22:26:46.769: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 22:26:46.769: INFO: Container grafana ready: true, restart count 0 May 20 22:26:46.769: INFO: Container prometheus ready: true, restart count 1 May 20 22:26:46.769: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 22:26:46.787: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 22:26:46.787: INFO: Container nodereport ready: true, restart count 0 May 20 22:26:46.787: INFO: Container reconcile ready: true, restart count 0 May 20 22:26:46.787: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 22:26:46.788: INFO: Container discover ready: false, restart count 0 May 20 22:26:46.788: INFO: Container init ready: false, restart count 0 May 20 22:26:46.788: INFO: Container install ready: false, restart count 0 May 20 22:26:46.788: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container cmk-webhook ready: true, restart count 0 May 20 22:26:46.788: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container kube-flannel ready: true, restart count 2 May 20 22:26:46.788: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container kube-multus ready: true, restart count 1 May 20 22:26:46.788: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container kube-proxy ready: true, restart count 2 May 20 22:26:46.788: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 22:26:46.788: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container 
statuses recorded) May 20 22:26:46.788: INFO: Container nginx-proxy ready: true, restart count 2 May 20 22:26:46.788: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container nfd-worker ready: true, restart count 0 May 20 22:26:46.788: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 22:26:46.788: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 22:26:46.788: INFO: Container collectd ready: true, restart count 0 May 20 22:26:46.788: INFO: Container collectd-exporter ready: true, restart count 0 May 20 22:26:46.788: INFO: Container rbac-proxy ready: true, restart count 0 May 20 22:26:46.788: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 22:26:46.788: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 22:26:46.788: INFO: Container node-exporter ready: true, restart count 0 May 20 22:26:46.788: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 22:26:46.788: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f0f099d988726d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:26:47.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3640" for this suite. 
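------------------------------
A pod whose nodeSelector matches no node, like restricted-pod above, is never bound: it stays Pending while the scheduler keeps emitting FailedScheduling events. A sketch of such a spec; the label key and value are placeholders, not the ones this run used.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the pod can never schedule and
			// the event above ("didn't match Pod's node affinity/selector")
			// is reported for every candidate node.
			NodeSelector: map[string]string{"nonexistent-label-key": "nonexistent-label-value"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------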
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":13,"skipped":4168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:26:47.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 20 22:26:47.860: INFO: Waiting up to 1m0s for all nodes to be ready May 20 22:27:47.917: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:27:47.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 20 22:27:51.979: INFO: found a healthy node: node1 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 20 22:28:12.036: INFO: pods created so far: [1 1 1] May 20 22:28:12.036: INFO: length of pods created so far: 3 May 20 22:28:16.050: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:28:23.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3739" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:28:23.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1972" for this suite. 
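------------------------------
PreemptionExecutionPath drives preemption through ReplicaSets rather than bare pods, so every preempted victim is recreated by its controller and the pods-created-so-far counts above keep growing. A sketch of a ReplicaSet whose template carries a priority class; the names, namespace, and replica count are illustrative, and the class is assumed to exist.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	replicas := int32(1)
	labels := map[string]string{"app": "rs-priority-example"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-priority-example"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Assumed to have been created beforehand.
					PriorityClassName: "high-priority-example",
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------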
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:95.298 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":4258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:28:23.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 20 22:28:23.447: INFO: Pod name wrapped-volume-race-b2b49db3-421c-45ca-81c7-20aa823f4e89: Found 3 pods out of 5 May 20 22:28:28.457: INFO: Pod name wrapped-volume-race-b2b49db3-421c-45ca-81c7-20aa823f4e89: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b2b49db3-421c-45ca-81c7-20aa823f4e89 in namespace emptydir-wrapper-1308, will wait for the garbage collector to delete the pods May 20 22:28:44.543: INFO: Deleting ReplicationController wrapped-volume-race-b2b49db3-421c-45ca-81c7-20aa823f4e89 took: 6.155846ms May 20 22:28:44.644: INFO: Terminating ReplicationController wrapped-volume-race-b2b49db3-421c-45ca-81c7-20aa823f4e89 pods took: 100.587256ms STEP: Creating RC which spawns configmap-volume pods May 20 22:28:55.963: INFO: Pod name wrapped-volume-race-d344b6d5-be51-4013-9261-1a4b7ff2413d: Found 0 pods out of 5 
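------------------------------
Each pod spawned by the ReplicationControllers above mounts all 50 ConfigMaps as separate volumes, which is what makes the emptyDir wrapper race observable under churn. A sketch that assembles such a pod spec; the racey-configmap naming and mount paths are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	// One volume and one mount per ConfigMap; 50 matches the count created
	// by the test above, the names are placeholders.
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/config/" + name})
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-example"},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "pause",
				Image:        "k8s.gcr.io/pause:3.4.1",
				VolumeMounts: mounts,
			}},
		},
	}
	fmt.Printf("pod %s mounts %d configmap volumes\n", pod.Name, len(pod.Spec.Volumes))
}
------------------------------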
May 20 22:29:00.976: INFO: Pod name wrapped-volume-race-d344b6d5-be51-4013-9261-1a4b7ff2413d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d344b6d5-be51-4013-9261-1a4b7ff2413d in namespace emptydir-wrapper-1308, will wait for the garbage collector to delete the pods May 20 22:29:15.059: INFO: Deleting ReplicationController wrapped-volume-race-d344b6d5-be51-4013-9261-1a4b7ff2413d took: 6.254187ms May 20 22:29:15.160: INFO: Terminating ReplicationController wrapped-volume-race-d344b6d5-be51-4013-9261-1a4b7ff2413d pods took: 101.013531ms STEP: Creating RC which spawns configmap-volume pods May 20 22:29:22.076: INFO: Pod name wrapped-volume-race-15c73278-fe2a-4b85-b1b1-f145782a3863: Found 0 pods out of 5 May 20 22:29:27.089: INFO: Pod name wrapped-volume-race-15c73278-fe2a-4b85-b1b1-f145782a3863: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-15c73278-fe2a-4b85-b1b1-f145782a3863 in namespace emptydir-wrapper-1308, will wait for the garbage collector to delete the pods May 20 22:29:37.174: INFO: Deleting ReplicationController wrapped-volume-race-15c73278-fe2a-4b85-b1b1-f145782a3863 took: 5.895291ms May 20 22:29:37.274: INFO: Terminating ReplicationController wrapped-volume-race-15c73278-fe2a-4b85-b1b1-f145782a3863 pods took: 100.360799ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 22:29:46.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1308" for this suite. • [SLOW TEST:83.733 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":15,"skipped":5059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 22:29:46.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon 
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 20 22:29:46.936: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:46.936: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:46.936: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:46.940: INFO: Number of nodes with available pods: 0
May 20 22:29:46.940: INFO: Node node1 is running more than one daemon pod
May 20 22:29:47.947: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:47.947: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:47.947: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:47.950: INFO: Number of nodes with available pods: 0
May 20 22:29:47.950: INFO: Node node1 is running more than one daemon pod
May 20 22:29:48.948: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:48.948: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:48.948: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:48.950: INFO: Number of nodes with available pods: 0
May 20 22:29:48.950: INFO: Node node1 is running more than one daemon pod
May 20 22:29:49.948: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.949: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.949: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.953: INFO: Number of nodes with available pods: 2
May 20 22:29:49.953: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 20 22:29:49.969: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.969: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.969: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:49.973: INFO: Number of nodes with available pods: 1
May 20 22:29:49.973: INFO: Node node2 is running more than one daemon pod
May 20 22:29:50.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:50.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:50.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:50.982: INFO: Number of nodes with available pods: 1
May 20 22:29:50.983: INFO: Node node2 is running more than one daemon pod
May 20 22:29:51.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:51.981: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:51.981: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:51.983: INFO: Number of nodes with available pods: 1
May 20 22:29:51.983: INFO: Node node2 is running more than one daemon pod
May 20 22:29:52.978: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:52.978: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:52.978: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:52.982: INFO: Number of nodes with available pods: 1
May 20 22:29:52.982: INFO: Node node2 is running more than one daemon pod
May 20 22:29:53.979: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:53.979: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:53.979: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:53.981: INFO: Number of nodes with available pods: 1
May 20 22:29:53.981: INFO: Node node2 is running more than one daemon pod
May 20 22:29:54.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:54.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:54.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:54.983: INFO: Number of nodes with available pods: 1
May 20 22:29:54.983: INFO: Node node2 is running more than one daemon pod
May 20 22:29:55.981: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:55.982: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:55.982: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:55.984: INFO: Number of nodes with available pods: 1
May 20 22:29:55.985: INFO: Node node2 is running more than one daemon pod
May 20 22:29:56.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:56.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:56.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:56.982: INFO: Number of nodes with available pods: 1
May 20 22:29:56.982: INFO: Node node2 is running more than one daemon pod
May 20 22:29:57.978: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:57.978: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:57.978: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:57.983: INFO: Number of nodes with available pods: 1
May 20 22:29:57.983: INFO: Node node2 is running more than one daemon pod
May 20 22:29:58.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:58.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:58.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:58.983: INFO: Number of nodes with available pods: 1
May 20 22:29:58.983: INFO: Node node2 is running more than one daemon pod
May 20 22:29:59.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:59.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:59.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 22:29:59.982: INFO: Number of nodes with available pods: 2
May 20 22:29:59.982: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9577, will wait for the garbage collector to delete the pods
May 20 22:30:00.042: INFO: Deleting DaemonSet.extensions daemon-set took: 4.513012ms
May 20 22:30:00.143: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.583971ms
May 20 22:30:06.845: INFO: Number of nodes with available pods: 0
May 20 22:30:06.846: INFO: Number of running nodes: 0, number of available pods: 0
May 20 22:30:06.847: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55435"},"items":null}
May 20 22:30:06.849: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55435"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:30:06.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9577" for this suite.
• [SLOW TEST:19.988 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":16,"skipped":5641,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 22:30:06.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 22:30:06.905: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 22:31:06.964: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
May 20 22:31:06.991: INFO: Created pod: pod0-sched-preemption-low-priority
May 20 22:31:07.012: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 22:31:39.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2465" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:92.240 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":17,"skipped":5709,"failed":0}
May 20 22:31:39.113: INFO: Running AfterSuite actions on all nodes
May 20 22:31:39.113: INFO: Running AfterSuite actions on node 1
May 20 22:31:39.113: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0}

Ran 17 of 5773 Specs in 905.425 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped
PASS

Ginkgo ran 1 suite in 15m6.826983168s
Test Suite Passed