I1023 01:54:01.199329 22 e2e.go:129] Starting e2e run "f63e5c46-d4e5-4eb4-b7af-0d6d09513612" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1634954039 - Will randomize all specs Will run 17 of 5770 specs Oct 23 01:54:01.258: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:54:01.263: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Oct 23 01:54:01.291: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 01:54:01.357: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 01:54:01.357: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 01:54:01.357: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 01:54:01.357: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 23 01:54:01.357: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Oct 23 01:54:01.375: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Oct 23 01:54:01.375: INFO: e2e test version: v1.21.5 Oct 23 01:54:01.376: INFO: kube-apiserver version: v1.21.1 Oct 23 01:54:01.377: INFO: >>> kubeConfig: /root/.kube/config Oct 23 01:54:01.382: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:54:01.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets W1023 01:54:01.415193 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 01:54:01.415: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 01:54:01.418: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:54:01.437: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 23 01:54:01.445: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:01.445: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:01.445: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:01.448: INFO: Number of nodes with available pods: 0 Oct 23 01:54:01.448: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:02.454: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:02.454: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:02.454: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:02.457: INFO: Number of nodes with available pods: 0 Oct 23 01:54:02.457: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:03.454: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:03.454: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:03.454: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:03.457: INFO: Number of nodes with available pods: 0 Oct 23 01:54:03.457: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:04.454: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:04.454: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:04.454: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:04.457: INFO: Number of nodes with available pods: 1 Oct 23 01:54:04.457: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:05.453: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:05.454: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:05.454: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:05.456: INFO: Number of nodes with available pods: 2 Oct 23 01:54:05.456: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 23 01:54:05.482: INFO: Wrong image for pod: daemon-set-f4f4v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:05.482: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:05.487: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:05.487: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:05.487: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:06.491: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:06.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:06.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:06.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:07.492: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:07.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:07.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:07.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:08.491: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:08.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:08.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:08.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:09.493: INFO: Pod daemon-set-jlhcm is not available Oct 23 01:54:09.493: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:09.501: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:09.501: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:09.502: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:10.492: INFO: Pod daemon-set-jlhcm is not available Oct 23 01:54:10.492: INFO: Wrong image for pod: daemon-set-vf9vr. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 23 01:54:10.497: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:10.497: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:10.497: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:11.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:11.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:11.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:12.497: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:12.497: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:12.497: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:13.499: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:13.499: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:13.499: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:14.498: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:14.498: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:14.498: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:15.497: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:15.497: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:15.497: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:16.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:16.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:16.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:17.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:17.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:17.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:18.499: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:18.499: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:18.499: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:19.499: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:19.499: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:19.499: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:20.496: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:20.496: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:20.496: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:21.495: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:21.495: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:21.495: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:22.500: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:22.500: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:22.500: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:23.498: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:23.498: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:23.498: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.493: INFO: Pod daemon-set-brhc5 is not available Oct 23 01:54:24.497: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.498: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.498: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Oct 23 01:54:24.502: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.502: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.502: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:24.505: INFO: Number of nodes with available pods: 1 Oct 23 01:54:24.505: INFO: Node node2 is running more than one daemon pod Oct 23 01:54:25.512: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:25.512: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:25.512: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:25.515: INFO: Number of nodes with available pods: 1 Oct 23 01:54:25.515: INFO: Node node2 is running more than one daemon pod Oct 23 01:54:26.511: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:26.511: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:26.511: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:26.514: INFO: Number of nodes with available pods: 1 Oct 23 01:54:26.514: INFO: Node node2 is running more than one daemon pod Oct 23 01:54:27.511: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:27.511: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:27.511: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:27.514: INFO: Number of nodes with available pods: 2 Oct 23 01:54:27.514: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3008, will wait for the garbage collector to delete the pods Oct 23 01:54:27.588: INFO: Deleting DaemonSet.extensions daemon-set took: 5.497206ms Oct 23 01:54:27.788: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.250542ms Oct 23 01:54:34.291: INFO: Number of nodes with available pods: 0 Oct 23 01:54:34.291: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:54:34.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"108626"},"items":null} Oct 23 01:54:34.297: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"108626"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:54:34.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3008" for this suite. • [SLOW TEST:32.930 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":1,"skipped":523,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:54:34.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 23 01:54:34.371: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:34.371: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:34.371: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:34.374: INFO: Number of nodes with available pods: 0 Oct 23 01:54:34.374: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:35.380: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:35.380: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:35.380: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:35.383: INFO: Number of nodes with available pods: 0 Oct 23 01:54:35.383: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:36.379: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:36.379: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:36.379: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:36.382: INFO: Number of nodes with available pods: 0 Oct 23 01:54:36.382: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:37.380: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.380: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.380: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.383: INFO: Number of nodes with available pods: 2 Oct 23 01:54:37.383: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Oct 23 01:54:37.398: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.398: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.398: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:37.401: INFO: Number of nodes with available pods: 1 Oct 23 01:54:37.401: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:38.408: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:38.408: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:38.408: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:38.411: INFO: Number of nodes with available pods: 1 Oct 23 01:54:38.411: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:39.409: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:39.409: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:39.409: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:39.411: INFO: Number of nodes with available pods: 1 Oct 23 01:54:39.411: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:40.407: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:40.408: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:40.408: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:40.410: INFO: Number of nodes with available pods: 1 Oct 23 01:54:40.410: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:41.406: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:41.406: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:41.406: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:41.409: INFO: Number of nodes with available pods: 1 Oct 23 01:54:41.409: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:42.408: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:42.408: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:42.408: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:42.411: INFO: Number of nodes with available pods: 1 Oct 23 01:54:42.411: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:43.409: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:43.409: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:43.409: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:43.412: INFO: Number of nodes with available pods: 1 Oct 23 01:54:43.412: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:44.410: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:44.410: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:44.410: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:44.414: INFO: Number of nodes with available pods: 1 Oct 23 01:54:44.414: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:45.406: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:45.406: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:45.407: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:45.409: INFO: Number of nodes with available pods: 1 Oct 23 01:54:45.409: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:46.406: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:46.406: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:46.406: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:46.410: INFO: Number of nodes with available pods: 1 Oct 23 01:54:46.410: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:47.408: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:47.409: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:47.409: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:47.412: INFO: Number of nodes with available pods: 2 Oct 23 01:54:47.412: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8881, will wait for the garbage collector to delete the pods Oct 23 01:54:47.473: INFO: Deleting DaemonSet.extensions daemon-set took: 4.31747ms Oct 23 01:54:47.574: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.869329ms Oct 23 01:54:52.378: INFO: Number of nodes with available pods: 0 Oct 23 01:54:52.378: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:54:52.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"108769"},"items":null} Oct 23 01:54:52.382: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"108769"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:54:52.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8881" for this suite. • [SLOW TEST:18.082 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":2,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:54:52.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 01:54:52.447: INFO: Create a RollingUpdate DaemonSet Oct 23 01:54:52.453: INFO: Check that daemon pods launch on every node of the cluster Oct 23 01:54:52.459: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:52.459: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:52.459: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:52.465: INFO: Number of nodes with available pods: 0 Oct 23 01:54:52.465: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:53.470: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:53.470: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:53.470: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:53.472: INFO: Number of nodes with available pods: 0 Oct 23 01:54:53.472: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:54.471: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:54.471: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:54.471: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:54.474: INFO: Number of nodes with available pods: 0 Oct 23 01:54:54.474: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:55.471: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:55.472: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:55.472: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:55.474: INFO: Number of nodes with available pods: 1 Oct 23 01:54:55.474: INFO: Node node1 is running more than one daemon pod Oct 23 01:54:56.472: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:56.472: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:56.472: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:54:56.475: INFO: Number of nodes with available pods: 2 Oct 23 01:54:56.475: INFO: Number of running nodes: 2, number of available pods: 2 Oct 23 01:54:56.475: INFO: Update the DaemonSet to trigger a rollout Oct 23 01:54:56.483: INFO: Updating DaemonSet daemon-set Oct 23 01:55:04.497: INFO: Roll back the DaemonSet before rollout is complete Oct 23 01:55:04.505: INFO: Updating DaemonSet daemon-set Oct 23 01:55:04.505: INFO: Make sure DaemonSet rollback is complete Oct 23 01:55:04.508: INFO: Wrong image for pod: daemon-set-qfbbk. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Oct 23 01:55:04.508: INFO: Pod daemon-set-qfbbk is not available Oct 23 01:55:04.513: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:04.513: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:04.513: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:05.522: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:05.522: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:05.522: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:06.523: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:06.523: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:06.523: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:07.524: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:07.524: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:07.524: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:08.517: INFO: Pod daemon-set-2mchd is not available Oct 23 01:55:08.523: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:08.523: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 01:55:08.523: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3883, will wait for the garbage collector to delete the pods Oct 23 01:55:08.586: INFO: Deleting DaemonSet.extensions daemon-set took: 4.068142ms Oct 23 01:55:08.687: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.234494ms Oct 23 01:55:14.291: INFO: Number of nodes with available pods: 0 Oct 23 01:55:14.291: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 01:55:14.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"108934"},"items":null} Oct 23 01:55:14.297: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"108934"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:55:14.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3883" for this suite. • [SLOW TEST:21.919 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":3,"skipped":624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:55:14.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 01:55:14.365: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 01:56:14.425: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Oct 23 01:56:14.458: INFO: Created pod: pod0-sched-preemption-low-priority Oct 23 01:56:14.477: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 01:56:38.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2323" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:84.239 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":4,"skipped":955,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 01:56:38.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 01:56:38.589: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 01:56:38.596: INFO: Waiting for terminating namespaces to be deleted... Oct 23 01:56:38.599: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 01:56:38.609: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 01:56:38.609: INFO: Container discover ready: false, restart count 0 Oct 23 01:56:38.609: INFO: Container init ready: false, restart count 0 Oct 23 01:56:38.609: INFO: Container install ready: false, restart count 0 Oct 23 01:56:38.609: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:56:38.609: INFO: Container nodereport ready: true, restart count 0 Oct 23 01:56:38.609: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:56:38.609: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 01:56:38.609: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:56:38.609: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:56:38.609: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 01:56:38.609: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 01:56:38.609: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:56:38.609: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:56:38.609: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:56:38.609: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 01:56:38.609: INFO: Container collectd ready: true, restart count 0 Oct 23 01:56:38.609: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:56:38.609: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:56:38.609: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:56:38.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:56:38.609: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:56:38.609: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 01:56:38.609: INFO: Container config-reloader ready: true, restart count 0 Oct 23 01:56:38.610: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 01:56:38.610: INFO: Container grafana ready: true, restart count 0 Oct 23 01:56:38.610: INFO: Container prometheus ready: true, restart count 1 Oct 23 01:56:38.610: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 01:56:38.610: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:56:38.610: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 01:56:38.610: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 01:56:38.619: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 01:56:38.619: INFO: Container discover ready: false, restart count 0 Oct 23 01:56:38.620: INFO: Container init ready: false, restart count 0 Oct 23 01:56:38.620: INFO: Container install ready: false, restart count 0 Oct 23 01:56:38.620: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 01:56:38.620: INFO: Container nodereport ready: true, restart count 1 Oct 23 01:56:38.620: INFO: Container reconcile ready: true, restart count 0 Oct 23 01:56:38.620: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 01:56:38.620: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 01:56:38.620: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container kube-multus ready: true, restart count 1 Oct 23 01:56:38.620: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 01:56:38.620: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 01:56:38.620: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 01:56:38.620: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 01:56:38.620: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 01:56:38.620: INFO: Container collectd ready: true, restart count 0 Oct 23 01:56:38.620: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 01:56:38.620: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 01:56:38.620: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 01:56:38.620: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 01:56:38.620: INFO: Container node-exporter ready: true, restart count 0 Oct 23 01:56:38.620: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container tas-extender ready: true, restart count 0 Oct 23 01:56:38.620: INFO: pod1-sched-preemption-medium-priority from sched-preemption-2323 started at 2021-10-23 01:56:17 +0000 UTC (1 container statuses recorded) Oct 23 01:56:38.620: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5141fcfa-62cc-45dd-8e91-4db9fa2927d7 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.207 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5141fcfa-62cc-45dd-8e91-4db9fa2927d7 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5141fcfa-62cc-45dd-8e91-4db9fa2927d7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:01:54.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5841" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:316.161 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":5,"skipped":976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:01:54.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:02:09.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7620" for this suite. STEP: Destroying namespace "nsdeletetest-387" for this suite. Oct 23 02:02:09.868: INFO: Namespace nsdeletetest-387 was already deleted STEP: Destroying namespace "nsdeletetest-4885" for this suite. • [SLOW TEST:15.141 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":6,"skipped":1201,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:02:09.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 02:02:09.893: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 02:02:09.905: INFO: Waiting for terminating namespaces to be deleted... Oct 23 02:02:09.913: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 02:02:09.933: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 02:02:09.933: INFO: Container discover ready: false, restart count 0 Oct 23 02:02:09.933: INFO: Container init ready: false, restart count 0 Oct 23 02:02:09.933: INFO: Container install ready: false, restart count 0 Oct 23 02:02:09.933: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:02:09.933: INFO: Container nodereport ready: true, restart count 0 Oct 23 02:02:09.933: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:02:09.933: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 02:02:09.933: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:02:09.933: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:02:09.933: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 02:02:09.933: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 02:02:09.933: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:02:09.933: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:02:09.933: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.933: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:02:09.933: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:02:09.933: INFO: Container collectd ready: true, restart count 0 Oct 23 02:02:09.933: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:02:09.934: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:02:09.934: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:02:09.934: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 02:02:09.934: INFO: Container config-reloader ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container grafana ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container prometheus ready: true, restart count 1 Oct 23 02:02:09.934: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 02:02:09.934: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 02:02:09.934: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 02:02:09.941: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 02:02:09.941: INFO: Container discover ready: false, restart count 0 Oct 23 02:02:09.941: INFO: Container init ready: false, restart count 0 Oct 23 02:02:09.941: INFO: Container install ready: false, restart count 0 Oct 23 02:02:09.941: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:02:09.941: INFO: Container nodereport ready: true, restart count 1 Oct 23 02:02:09.941: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:02:09.941: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 02:02:09.941: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 02:02:09.941: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:02:09.941: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:02:09.941: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:02:09.941: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:02:09.941: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:02:09.941: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:02:09.941: INFO: Container collectd ready: true, restart count 0 Oct 23 02:02:09.941: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:02:09.941: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:02:09.941: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:02:09.941: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:02:09.941: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:02:09.941: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 02:02:09.941: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-90669d8b-17d8-413c-b9ad-dcf2edfa742a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-90669d8b-17d8-413c-b9ad-dcf2edfa742a off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-90669d8b-17d8-413c-b9ad-dcf2edfa742a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:02:18.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3851" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.150 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":7,"skipped":1220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:02:18.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 02:02:18.060: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 02:03:18.113: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:03:18.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Oct 23 02:03:22.172: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 02:03:34.228: INFO: pods created so far: [1 1 1] Oct 23 02:03:34.228: INFO: length of pods created so far: 3 Oct 23 02:03:50.245: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:03:57.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8042" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:03:57.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4138" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:99.298 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":8,"skipped":1616,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:03:57.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:03:57.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7548" for this suite. STEP: Destroying namespace "nspatchtest-30d61a5b-53a0-4bfd-908d-5918b6a24ac9-4536" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":9,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:03:57.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 02:03:57.427: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 02:03:57.437: INFO: Waiting for terminating namespaces to be deleted... Oct 23 02:03:57.439: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 02:03:57.449: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 02:03:57.449: INFO: Container discover ready: false, restart count 0 Oct 23 02:03:57.449: INFO: Container init ready: false, restart count 0 Oct 23 02:03:57.449: INFO: Container install ready: false, restart count 0 Oct 23 02:03:57.449: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:03:57.449: INFO: Container nodereport ready: true, restart count 0 Oct 23 02:03:57.449: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:03:57.449: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.449: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 02:03:57.449: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.449: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:03:57.449: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.449: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:03:57.449: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.449: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 02:03:57.449: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.449: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 02:03:57.450: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.450: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:03:57.450: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.450: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:03:57.450: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.450: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:03:57.450: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:03:57.450: INFO: Container collectd ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:03:57.450: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:03:57.450: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:03:57.450: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 02:03:57.450: INFO: Container config-reloader ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container grafana ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container prometheus ready: true, restart count 1 Oct 23 02:03:57.450: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 02:03:57.450: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 02:03:57.450: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 02:03:57.458: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 02:03:57.458: INFO: Container discover ready: false, restart count 0 Oct 23 02:03:57.458: INFO: Container init ready: false, restart count 0 Oct 23 02:03:57.458: INFO: Container install ready: false, restart count 0 Oct 23 02:03:57.458: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:03:57.459: INFO: Container nodereport ready: true, restart count 1 Oct 23 02:03:57.459: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:03:57.459: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 02:03:57.459: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 02:03:57.459: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:03:57.459: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:03:57.459: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:03:57.459: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:03:57.459: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:03:57.459: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:03:57.459: INFO: Container collectd ready: true, restart count 0 Oct 23 02:03:57.459: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:03:57.459: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:03:57.459: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:03:57.459: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:03:57.459: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:03:57.459: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container tas-extender ready: true, restart count 0 Oct 23 02:03:57.459: INFO: pod4 from sched-preemption-path-8042 started at 2021-10-23 02:03:49 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container pod4 ready: true, restart count 0 Oct 23 02:03:57.459: INFO: rs-pod3-9zrbd from sched-preemption-path-8042 started at 2021-10-23 02:03:30 +0000 UTC (1 container statuses recorded) Oct 23 02:03:57.459: INFO: Container pod3 ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Oct 23 02:04:03.572: INFO: Pod cmk-kn29k requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod cmk-t9r2t requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod cmk-webhook-6c9d5f8578-pkwhc requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod kube-flannel-2cdvd requesting resource cpu=150m on Node node1 Oct 23 02:04:03.572: INFO: Pod kube-flannel-xx6ls requesting resource cpu=150m on Node node2 Oct 23 02:04:03.572: INFO: Pod kube-multus-ds-amd64-fww5b requesting resource cpu=100m on Node node2 Oct 23 02:04:03.572: INFO: Pod kube-multus-ds-amd64-l97s4 requesting resource cpu=100m on Node node1 Oct 23 02:04:03.572: INFO: Pod kube-proxy-5h2bl requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod kube-proxy-m9z8s requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod kubernetes-dashboard-785dcbb76d-kc4kh requesting resource cpu=50m on Node node1 Oct 23 02:04:03.572: INFO: Pod kubernetes-metrics-scraper-5558854cb-dfn2n requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Oct 23 02:04:03.572: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Oct 23 02:04:03.572: INFO: Pod node-feature-discovery-worker-2pvq5 requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod node-feature-discovery-worker-8k8m5 requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod collectd-n9sbv requesting resource cpu=0m on Node node1 Oct 23 02:04:03.572: INFO: Pod collectd-xhdgw requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod node-exporter-fjc79 requesting resource cpu=112m on Node node2 Oct 23 02:04:03.572: INFO: Pod node-exporter-v656r requesting resource cpu=112m on Node node1 Oct 23 02:04:03.572: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Oct 23 02:04:03.572: INFO: Pod prometheus-operator-585ccfb458-hwjk2 requesting resource cpu=100m on Node node1 Oct 23 02:04:03.572: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-gltgg requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod pod4 requesting resource cpu=0m on Node node2 Oct 23 02:04:03.572: INFO: Pod rs-pod3-9zrbd requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. Oct 23 02:04:03.572: INFO: Creating a pod which consumes cpu=53384m on Node node1 Oct 23 02:04:03.583: INFO: Creating a pod which consumes cpu=53629m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d.16b0869602bbd1bd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8488/filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d.16b0869665f76647], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d.16b086967d9a3246], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 396.535951ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d.16b08696848ddf00], Reason = [Created], Message = [Created container filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d] STEP: Considering event: Type = [Normal], Name = [filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d.16b086968c03ffda], Reason = [Started], Message = [Started container filler-pod-141b0e6e-6273-4a38-9acd-51d8898ad03d] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0.16b08696033b0857], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8488/filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0.16b0869660fa598d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0.16b08696747fa093], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 327.49097ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0.16b086967b5ef327], Reason = [Created], Message = [Created container filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0.16b08696851bf073], Reason = [Started], Message = [Started container filler-pod-3f33fc80-23ea-4149-ad84-a84d366f7ef0] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b08696f32ece0e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:04:08.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8488" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.262 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":10,"skipped":2464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:04:08.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 23 02:04:08.972: INFO: Pod name wrapped-volume-race-f3342c19-77a1-405f-a7b6-f4636f5cb289: Found 3 pods out of 5 Oct 23 02:04:13.983: INFO: Pod name wrapped-volume-race-f3342c19-77a1-405f-a7b6-f4636f5cb289: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f3342c19-77a1-405f-a7b6-f4636f5cb289 in namespace emptydir-wrapper-9414, will wait for the garbage collector to delete the pods Oct 23 02:04:28.067: INFO: Deleting ReplicationController wrapped-volume-race-f3342c19-77a1-405f-a7b6-f4636f5cb289 took: 5.239014ms Oct 23 02:04:28.168: INFO: Terminating ReplicationController wrapped-volume-race-f3342c19-77a1-405f-a7b6-f4636f5cb289 pods took: 100.813301ms STEP: Creating RC which spawns configmap-volume pods Oct 23 02:04:44.287: INFO: Pod name wrapped-volume-race-537e79ab-54cd-46d5-ab2e-26041b9f7323: Found 0 pods out of 5 Oct 23 02:04:49.296: INFO: Pod name wrapped-volume-race-537e79ab-54cd-46d5-ab2e-26041b9f7323: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-537e79ab-54cd-46d5-ab2e-26041b9f7323 in namespace emptydir-wrapper-9414, will wait for the garbage collector to delete the pods Oct 23 02:05:15.390: INFO: Deleting ReplicationController wrapped-volume-race-537e79ab-54cd-46d5-ab2e-26041b9f7323 took: 8.049023ms Oct 23 02:05:15.490: INFO: Terminating ReplicationController wrapped-volume-race-537e79ab-54cd-46d5-ab2e-26041b9f7323 pods took: 100.280241ms STEP: Creating RC which spawns configmap-volume pods Oct 23 02:05:24.309: INFO: Pod name wrapped-volume-race-031fdcc3-29fa-4c27-b64e-bd837a7123d4: Found 0 pods out of 5 Oct 23 02:05:29.324: INFO: Pod name wrapped-volume-race-031fdcc3-29fa-4c27-b64e-bd837a7123d4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-031fdcc3-29fa-4c27-b64e-bd837a7123d4 in namespace emptydir-wrapper-9414, will wait for the garbage collector to delete the pods Oct 23 02:05:45.421: INFO: Deleting ReplicationController wrapped-volume-race-031fdcc3-29fa-4c27-b64e-bd837a7123d4 took: 7.910262ms Oct 23 02:05:45.522: INFO: Terminating ReplicationController wrapped-volume-race-031fdcc3-29fa-4c27-b64e-bd837a7123d4 pods took: 100.954005ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:05:54.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9414" for this suite. • [SLOW TEST:105.439 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":11,"skipped":2922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:05:54.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 23 02:05:54.164: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:54.164: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:54.164: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:54.166: INFO: Number of nodes with available pods: 0 Oct 23 02:05:54.166: INFO: Node node1 is running more than one daemon pod Oct 23 02:05:55.172: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:55.172: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:55.172: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:55.175: INFO: Number of nodes with available pods: 0 Oct 23 02:05:55.175: INFO: Node node1 is running more than one daemon pod Oct 23 02:05:56.177: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:56.177: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:56.177: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:56.181: INFO: Number of nodes with available pods: 0 Oct 23 02:05:56.181: INFO: Node node1 is running more than one daemon pod Oct 23 02:05:57.173: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:57.173: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:57.173: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:57.176: INFO: Number of nodes with available pods: 1 Oct 23 02:05:57.176: INFO: Node node2 is running more than one daemon pod Oct 23 02:05:58.173: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.173: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.173: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.176: INFO: Number of nodes with available pods: 2 Oct 23 02:05:58.176: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Oct 23 02:05:58.191: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.191: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.191: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 23 02:05:58.193: INFO: Number of nodes with available pods: 2 Oct 23 02:05:58.194: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6765, will wait for the garbage collector to delete the pods Oct 23 02:05:58.256: INFO: Deleting DaemonSet.extensions daemon-set took: 5.435727ms Oct 23 02:05:58.357: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.601196ms Oct 23 02:06:04.260: INFO: Number of nodes with available pods: 0 Oct 23 02:06:04.260: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 02:06:04.262: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"112052"},"items":null} Oct 23 02:06:04.265: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"112052"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:06:04.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6765" for this suite. • [SLOW TEST:10.179 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":12,"skipped":3567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:06:04.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:06:10.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-916" for this suite. STEP: Destroying namespace "nsdeletetest-9771" for this suite. Oct 23 02:06:10.396: INFO: Namespace nsdeletetest-9771 was already deleted STEP: Destroying namespace "nsdeletetest-9905" for this suite. • [SLOW TEST:6.105 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":13,"skipped":3687,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:06:10.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 02:06:10.443: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 02:07:10.501: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Oct 23 02:07:10.529: INFO: Created pod: pod0-sched-preemption-low-priority Oct 23 02:07:10.549: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:07:36.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7476" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:86.218 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":14,"skipped":4108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:07:36.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 02:07:36.668: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Oct 23 02:07:36.677: INFO: Number of nodes with available pods: 0 Oct 23 02:07:36.677: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Oct 23 02:07:36.702: INFO: Number of nodes with available pods: 0 Oct 23 02:07:36.702: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:37.706: INFO: Number of nodes with available pods: 0 Oct 23 02:07:37.706: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:38.707: INFO: Number of nodes with available pods: 0 Oct 23 02:07:38.707: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:39.708: INFO: Number of nodes with available pods: 1 Oct 23 02:07:39.708: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 23 02:07:39.722: INFO: Number of nodes with available pods: 1 Oct 23 02:07:39.723: INFO: Number of running nodes: 0, number of available pods: 1 Oct 23 02:07:40.728: INFO: Number of nodes with available pods: 0 Oct 23 02:07:40.728: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 23 02:07:40.736: INFO: Number of nodes with available pods: 0 Oct 23 02:07:40.736: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:41.740: INFO: Number of nodes with available pods: 0 Oct 23 02:07:41.740: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:42.743: INFO: Number of nodes with available pods: 0 Oct 23 02:07:42.743: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:43.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:43.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:44.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:44.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:45.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:45.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:46.740: INFO: Number of nodes with available pods: 0 Oct 23 02:07:46.740: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:47.743: INFO: Number of nodes with available pods: 0 Oct 23 02:07:47.743: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:48.743: INFO: Number of nodes with available pods: 0 Oct 23 02:07:48.743: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:49.740: INFO: Number of nodes with available pods: 0 Oct 23 02:07:49.740: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:50.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:50.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:51.740: INFO: Number of nodes with available pods: 0 Oct 23 02:07:51.740: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:52.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:52.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:53.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:53.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:54.742: INFO: Number of nodes with available pods: 0 Oct 23 02:07:54.742: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:55.741: INFO: Number of nodes with available pods: 0 Oct 23 02:07:55.741: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:56.739: INFO: Number of nodes with available pods: 0 Oct 23 02:07:56.739: INFO: Node node1 is running more than one daemon pod Oct 23 02:07:57.742: INFO: Number of nodes with available pods: 1 Oct 23 02:07:57.742: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7134, will wait for the garbage collector to delete the pods Oct 23 02:07:57.806: INFO: Deleting DaemonSet.extensions daemon-set took: 4.559304ms Oct 23 02:07:57.907: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.140785ms Oct 23 02:08:03.910: INFO: Number of nodes with available pods: 0 Oct 23 02:08:03.910: INFO: Number of running nodes: 0, number of available pods: 0 Oct 23 02:08:03.912: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"112568"},"items":null} Oct 23 02:08:03.914: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"112568"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:08:03.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7134" for this suite. • [SLOW TEST:27.316 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":15,"skipped":4174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:08:03.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 02:08:03.976: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 02:09:04.028: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:09:04.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 23 02:09:04.070: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Oct 23 02:09:04.072: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:09:04.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1480" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:09:04.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4385" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.210 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":16,"skipped":4406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 02:09:04.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 02:09:04.177: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 02:09:04.186: INFO: Waiting for terminating namespaces to be deleted... Oct 23 02:09:04.188: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 02:09:04.198: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 02:09:04.198: INFO: Container discover ready: false, restart count 0 Oct 23 02:09:04.198: INFO: Container init ready: false, restart count 0 Oct 23 02:09:04.198: INFO: Container install ready: false, restart count 0 Oct 23 02:09:04.198: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:09:04.198: INFO: Container nodereport ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:09:04.198: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 02:09:04.198: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:09:04.198: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:09:04.198: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 02:09:04.198: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 02:09:04.198: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:09:04.198: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:09:04.198: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:09:04.198: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:09:04.198: INFO: Container collectd ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:09:04.198: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:09:04.198: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 02:09:04.198: INFO: Container config-reloader ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container grafana ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container prometheus ready: true, restart count 1 Oct 23 02:09:04.198: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 02:09:04.198: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 02:09:04.198: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 02:09:04.207: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 02:09:04.207: INFO: Container discover ready: false, restart count 0 Oct 23 02:09:04.207: INFO: Container init ready: false, restart count 0 Oct 23 02:09:04.207: INFO: Container install ready: false, restart count 0 Oct 23 02:09:04.207: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 02:09:04.207: INFO: Container nodereport ready: true, restart count 1 Oct 23 02:09:04.207: INFO: Container reconcile ready: true, restart count 0 Oct 23 02:09:04.207: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 02:09:04.207: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 02:09:04.207: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container kube-multus ready: true, restart count 1 Oct 23 02:09:04.207: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 02:09:04.207: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 02:09:04.207: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 02:09:04.207: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 02:09:04.207: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 02:09:04.207: INFO: Container collectd ready: true, restart count 0 Oct 23 02:09:04.207: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 02:09:04.207: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 02:09:04.207: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 02:09:04.207: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 02:09:04.207: INFO: Container node-exporter ready: true, restart count 0 Oct 23 02:09:04.207: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 02:09:04.207: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b086dc02eaed03], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 02:09:05.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6526" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":17,"skipped":4459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 23 02:09:05.265: INFO: Running AfterSuite actions on all nodes Oct 23 02:09:05.265: INFO: Running AfterSuite actions on node 1 Oct 23 02:09:05.265: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0} Ran 17 of 5770 Specs in 904.011 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped PASS Ginkgo ran 1 suite in 15m5.443461137s Test Suite Passed