I0617 22:15:00.310013 24 e2e.go:129] Starting e2e run "b8269130-5868-4935-8c05-37f40663eb31" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1655504099 - Will randomize all specs Will run 17 of 5773 specs Jun 17 22:15:00.370: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:15:00.375: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 17 22:15:00.403: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 22:15:00.472: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 22:15:00.472: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 22:15:00.473: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 22:15:00.473: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Jun 17 22:15:00.473: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 17 22:15:00.491: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Jun 17 22:15:00.491: INFO: e2e test version: v1.21.9 Jun 17 22:15:00.492: INFO: kube-apiserver version: v1.21.1 Jun 17 22:15:00.492: INFO: >>> kubeConfig: /root/.kube/config Jun 17 22:15:00.498: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 17 22:15:00.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
W0617 22:15:00.538704 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 17 22:15:00.538: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 17 22:15:00.542: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 17 22:15:31.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2669" for this suite.
STEP: Destroying namespace "nsdeletetest-1033" for this suite.
Jun 17 22:15:31.647: INFO: Namespace nsdeletetest-1033 was already deleted
STEP: Destroying namespace "nsdeletetest-3148" for this suite.
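The namespace test above boils down to: create a pod in a fresh namespace, delete the namespace, wait for it to be removed, then recreate it and confirm it contains no pods. A minimal client-go sketch of the same check follows; it is not the e2e framework's own code, and the package, function, namespace, and pod names (and the agnhost "pause" args) are illustrative. The agnhost image tag is the one referenced later in this log.

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// verifyNamespaceDeletionRemovesPods creates a pod in nsName, deletes the
// namespace, and waits until the namespace is fully gone, at which point its
// pods must be gone too; a recreated namespace of the same name starts empty.
func verifyNamespaceDeletionRemovesPods(ctx context.Context, cs kubernetes.Interface, nsName string) error {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: nsName}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		return err
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "test",
			Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
			Args:  []string{"pause"}, // keep the container running
		}}},
	}
	if _, err := cs.CoreV1().Pods(nsName).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	if err := cs.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// The namespace lingers in Terminating until every pod in it is deleted.
	if err := wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	}); err != nil {
		return fmt.Errorf("namespace %s was not removed: %v", nsName, err)
	}
	return nil
}
```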
• [SLOW TEST:31.149 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":1,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:15:31.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:15:31.693: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 17 22:15:31.700: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:31.700: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:31.700: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:31.704: INFO: Number of nodes with available pods: 0 Jun 17 22:15:31.704: INFO: Node node1 is running more than one daemon pod Jun 17 22:15:32.709: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:32.709: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:32.709: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:32.711: INFO: Number of nodes with available pods: 0 Jun 17 22:15:32.711: INFO: Node node1 is running more than one daemon pod Jun 17 22:15:33.710: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:33.710: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:33.710: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:33.714: INFO: Number of nodes with available pods: 0 Jun 17 22:15:33.714: INFO: Node node1 is running more than one daemon pod Jun 17 22:15:34.710: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:34.710: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:34.710: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:34.713: INFO: Number of nodes with available pods: 1 Jun 17 22:15:34.713: INFO: Node node1 is running more than one daemon pod Jun 17 22:15:35.710: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:35.710: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:35.710: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:35.714: INFO: Number of nodes with available pods: 2 Jun 17 22:15:35.714: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Jun 17 22:15:35.741: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:35.741: INFO: Wrong image for pod: daemon-set-nmn7d. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:35.745: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:35.745: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:35.745: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:36.748: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:36.753: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:36.753: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:36.753: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:37.751: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:37.756: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:37.756: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:37.756: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:38.752: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:38.756: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:38.756: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:38.756: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:39.750: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Jun 17 22:15:39.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:39.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:39.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:40.750: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:40.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:40.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:40.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:41.751: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:41.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:41.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:41.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:42.751: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:42.756: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:42.756: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:42.756: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:43.749: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:43.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:43.754: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:43.754: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:44.750: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Jun 17 22:15:44.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:44.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:44.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:45.750: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:45.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:45.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:45.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:46.752: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:46.757: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:46.757: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:46.757: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:47.752: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:47.757: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:47.757: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:47.757: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:48.751: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:48.756: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:48.756: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:48.756: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:49.751: INFO: Pod daemon-set-gfjkr is not available Jun 17 22:15:49.751: INFO: Wrong image for pod: daemon-set-m48b2. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:49.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:49.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:49.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:50.750: INFO: Pod daemon-set-gfjkr is not available Jun 17 22:15:50.750: INFO: Wrong image for pod: daemon-set-m48b2. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 17 22:15:50.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:50.754: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:50.754: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:51.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:51.754: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:51.754: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:52.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:52.754: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:52.754: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:53.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:53.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:53.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:54.753: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:54.753: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:54.753: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:55.754: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:55.754: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:55.754: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:56.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:56.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:56.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:57.755: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:57.755: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:57.755: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.753: INFO: Pod daemon-set-mxc48 is not available Jun 17 22:15:58.758: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.758: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.758: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 17 22:15:58.763: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.763: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.763: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:58.766: INFO: Number of nodes with available pods: 1 Jun 17 22:15:58.766: INFO: Node node2 is running more than one daemon pod Jun 17 22:15:59.772: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:59.772: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:59.772: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:15:59.775: INFO: Number of nodes with available pods: 1 Jun 17 22:15:59.775: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:00.772: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:00.772: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:00.772: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:00.774: INFO: Number of nodes with available pods: 2 Jun 17 22:16:00.774: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1904, will wait for the garbage collector to delete the pods Jun 17 22:16:00.846: INFO: Deleting DaemonSet.extensions daemon-set took: 4.663842ms Jun 17 22:16:00.947: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.235139ms Jun 17 22:16:09.350: INFO: Number of nodes with available pods: 0 Jun 17 22:16:09.350: INFO: Number of running nodes: 0, number of available pods: 0 Jun 17 22:16:09.352: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52000"},"items":null} Jun 17 22:16:09.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52000"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:16:09.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1904" for this suite. 
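The rolling-update test above swaps the pod image in the DaemonSet template (httpd:2.4.38-1 to agnhost:2.32 in this run) and then polls until every scheduled pod runs the new image. Below is a hedged client-go sketch of the same operation, driven off the DaemonSet status rather than individual pod images; the container name passed in is an assumption, since this log does not record it. Polling status.updatedNumberScheduled and numberAvailable is simpler than diffing pod images the way the e2e check does, and is equivalent once the rollout settles.

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// rollDaemonSetImage patches the image of one container in a RollingUpdate
// DaemonSet and waits until every scheduled pod has been replaced with the
// new template, mirroring the "daemon pods images are updated" check above.
func rollDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, container, image string) error {
	patch := fmt.Sprintf(
		`{"spec":{"template":{"spec":{"containers":[{"name":%q,"image":%q}]}}}}`,
		container, image)
	if _, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name,
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{}); err != nil {
		return err
	}
	// Wait for the rollout to finish on every node the DaemonSet targets.
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		done := ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled
		return done, nil
	})
}
```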
• [SLOW TEST:37.718 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":2,"skipped":729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:16:09.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:16:15.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8770" for this suite. STEP: Destroying namespace "nsdeletetest-3668" for this suite. Jun 17 22:16:15.470: INFO: Namespace nsdeletetest-3668 was already deleted STEP: Destroying namespace "nsdeletetest-7205" for this suite. 
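The services variant below follows the same shape as the pods variant earlier: create a Service in a throwaway namespace, delete and recreate the namespace, and expect the service list to come back empty. A small illustrative client-go sketch of the two pieces that differ from the pod case (service name, selector, and port are made up, not taken from this log):

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createTestService adds a minimal ClusterIP service to nsName; after the
// namespace is deleted and recreated, verifyNoServices should find nothing.
func createTestService(ctx context.Context, cs kubernetes.Interface, nsName string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "test"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	_, err := cs.CoreV1().Services(nsName).Create(ctx, svc, metav1.CreateOptions{})
	return err
}

// verifyNoServices confirms the recreated namespace starts without services.
func verifyNoServices(ctx context.Context, cs kubernetes.Interface, nsName string) error {
	list, err := cs.CoreV1().Services(nsName).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if n := len(list.Items); n != 0 {
		return fmt.Errorf("expected no services in %s, found %d", nsName, n)
	}
	return nil
}
```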
• [SLOW TEST:6.091 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":1203,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:16:15.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jun 17 22:16:15.531: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:15.531: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:15.531: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:15.533: INFO: Number of nodes with available pods: 0 Jun 17 22:16:15.533: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:16.540: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:16.540: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:16.540: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:16.544: INFO: Number of nodes with available pods: 0 Jun 17 22:16:16.544: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:17.540: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:17.540: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:17.540: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:17.545: INFO: Number of nodes with available pods: 0 Jun 17 22:16:17.545: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:18.549: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:18.549: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:18.549: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:18.553: INFO: Number of nodes with available pods: 1 Jun 17 22:16:18.553: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:19.541: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.541: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.541: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.544: INFO: Number of nodes with available pods: 2 Jun 17 22:16:19.544: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 17 22:16:19.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:19.564: INFO: Number of nodes with available pods: 1 Jun 17 22:16:19.564: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:20.569: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:20.569: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:20.569: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:20.576: INFO: Number of nodes with available pods: 1 Jun 17 22:16:20.576: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:21.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:21.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:21.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:21.572: INFO: Number of nodes with available pods: 1 Jun 17 22:16:21.572: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:22.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:22.571: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:22.571: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:22.573: INFO: Number of nodes with available pods: 1 Jun 17 22:16:22.573: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:23.572: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:23.573: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:23.573: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:23.575: INFO: Number of nodes with available pods: 1 Jun 17 22:16:23.575: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:24.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:24.571: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:24.571: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:24.574: INFO: Number of nodes with available pods: 1 Jun 17 22:16:24.574: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:25.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:25.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:25.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:25.573: INFO: Number of nodes with available pods: 1 Jun 17 22:16:25.573: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:26.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:26.572: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:26.572: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:26.574: INFO: Number of nodes with available pods: 1 Jun 17 22:16:26.574: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:27.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:27.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:27.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:27.574: INFO: Number of nodes with available pods: 1 Jun 17 22:16:27.574: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:28.573: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:28.573: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:28.573: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:28.576: INFO: Number of nodes with available pods: 1 Jun 17 22:16:28.576: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:29.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:29.571: INFO: DaemonSet pods can't tolerate node 
master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:29.571: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:29.574: INFO: Number of nodes with available pods: 1 Jun 17 22:16:29.574: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:30.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:30.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:30.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:30.573: INFO: Number of nodes with available pods: 1 Jun 17 22:16:30.573: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:31.573: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:31.573: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:31.573: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:31.576: INFO: Number of nodes with available pods: 2 Jun 17 22:16:31.576: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1873, will wait for the garbage collector to delete the pods Jun 17 22:16:31.639: INFO: Deleting DaemonSet.extensions daemon-set took: 6.613247ms Jun 17 22:16:31.739: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.933488ms Jun 17 22:16:39.444: INFO: Number of nodes with available pods: 0 Jun 17 22:16:39.444: INFO: Number of running nodes: 0, number of available pods: 0 Jun 17 22:16:39.447: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52221"},"items":null} Jun 17 22:16:39.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52221"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:16:39.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1873" for this suite. 
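Most of the output above is the launch check repeatedly skipping the three master nodes: the test DaemonSet carries no tolerations, so any node with a NoSchedule taint (node-role.kubernetes.io/master here) is excluded before comparing the nodes that should run a daemon pod with the nodes that do. A rough sketch of that node-filtering step, written against client-go rather than copied from the framework (function name is illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countDaemonTargetNodes returns how many nodes a toleration-less DaemonSet
// is expected to cover: nodes carrying any NoSchedule taint are skipped,
// which is what produces the "can't tolerate node masterN" lines above.
func countDaemonTargetNodes(ctx context.Context, cs kubernetes.Interface) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	target := 0
	for _, node := range nodes.Items {
		tolerable := true
		for _, taint := range node.Spec.Taints {
			// No tolerations on the test DaemonSet, so any NoSchedule
			// taint (e.g. node-role.kubernetes.io/master) excludes the node.
			if taint.Effect == corev1.TaintEffectNoSchedule {
				tolerable = false
				break
			}
		}
		if tolerable {
			target++
		} else {
			fmt.Printf("skipping node %s: taints %v\n", node.Name, node.Spec.Taints)
		}
	}
	return target, nil
}
```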
• [SLOW TEST:23.986 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":4,"skipped":1889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:16:39.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 17 22:16:39.527: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:39.527: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:39.527: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:39.532: INFO: Number of nodes with available pods: 0 Jun 17 22:16:39.532: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:40.538: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:40.538: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:40.538: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:40.541: INFO: Number of nodes with available pods: 0 Jun 17 22:16:40.541: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:41.537: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:41.537: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:41.537: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 
22:16:41.540: INFO: Number of nodes with available pods: 1 Jun 17 22:16:41.540: INFO: Node node1 is running more than one daemon pod Jun 17 22:16:42.541: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.541: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.541: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.544: INFO: Number of nodes with available pods: 2 Jun 17 22:16:42.544: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jun 17 22:16:42.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:42.565: INFO: Number of nodes with available pods: 1 Jun 17 22:16:42.565: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:43.572: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:43.572: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:43.572: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:43.575: INFO: Number of nodes with available pods: 1 Jun 17 22:16:43.575: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:44.570: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:44.570: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:44.570: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:44.573: INFO: Number of nodes with available pods: 1 Jun 17 22:16:44.573: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:45.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:45.571: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:45.571: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:45.574: INFO: Number of 
nodes with available pods: 1 Jun 17 22:16:45.574: INFO: Node node2 is running more than one daemon pod Jun 17 22:16:46.571: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:46.571: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:46.571: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:16:46.574: INFO: Number of nodes with available pods: 2 Jun 17 22:16:46.574: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7588, will wait for the garbage collector to delete the pods Jun 17 22:16:46.638: INFO: Deleting DaemonSet.extensions daemon-set took: 5.962743ms Jun 17 22:16:46.739: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.910194ms Jun 17 22:16:59.342: INFO: Number of nodes with available pods: 0 Jun 17 22:16:59.342: INFO: Number of running nodes: 0, number of available pods: 0 Jun 17 22:16:59.344: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52363"},"items":null} Jun 17 22:16:59.346: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52363"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:16:59.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7588" for this suite. 
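In the retry test above, the framework forces one daemon pod's phase to Failed and then waits for the DaemonSet controller to delete it and schedule a replacement. The sketch below shows that manipulation with client-go; it is illustrative only (pod name and timings are placeholders), and on a live node the kubelet may overwrite the forced status, so treat it as a description of the mechanism rather than a drop-in test.

```go
package e2esketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// failDaemonPodAndWait forces one daemon pod into the Failed phase and waits
// for the DaemonSet controller to delete it, after which a replacement pod
// is expected (the "daemon pod is revived" check in the log above).
func failDaemonPodAndWait(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// Wait for the failed pod to be removed by the controller.
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
}
```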
• [SLOW TEST:19.895 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":5,"skipped":1945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:16:59.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:16:59.410: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 17 22:16:59.421: INFO: Number of nodes with available pods: 0 Jun 17 22:16:59.421: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 17 22:16:59.445: INFO: Number of nodes with available pods: 0 Jun 17 22:16:59.445: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:00.450: INFO: Number of nodes with available pods: 0 Jun 17 22:17:00.450: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:01.449: INFO: Number of nodes with available pods: 0 Jun 17 22:17:01.449: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:02.449: INFO: Number of nodes with available pods: 1 Jun 17 22:17:02.449: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 17 22:17:02.465: INFO: Number of nodes with available pods: 1 Jun 17 22:17:02.465: INFO: Number of running nodes: 0, number of available pods: 1 Jun 17 22:17:03.471: INFO: Number of nodes with available pods: 0 Jun 17 22:17:03.471: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 17 22:17:03.481: INFO: Number of nodes with available pods: 0 Jun 17 22:17:03.481: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:04.485: INFO: Number of nodes with available pods: 0 Jun 17 22:17:04.485: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:05.484: INFO: Number of nodes with available pods: 0 Jun 17 22:17:05.484: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:06.486: INFO: Number of nodes with available pods: 0 Jun 17 22:17:06.486: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:07.487: INFO: Number of nodes with available pods: 0 Jun 17 22:17:07.487: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:08.486: INFO: Number of nodes with available pods: 0 Jun 17 22:17:08.486: INFO: Node node2 is running more than one daemon pod Jun 17 22:17:09.485: INFO: Number of nodes with available pods: 1 Jun 17 22:17:09.485: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1447, will wait for the garbage collector to delete the pods Jun 17 22:17:09.551: INFO: Deleting DaemonSet.extensions daemon-set took: 6.829698ms Jun 17 22:17:09.651: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.352732ms Jun 17 22:17:12.655: INFO: Number of nodes with available pods: 0 Jun 17 22:17:12.655: INFO: Number of running nodes: 0, number of available pods: 0 Jun 17 22:17:12.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52479"},"items":null} Jun 17 22:17:12.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52479"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:17:12.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1447" for this suite. 
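
The "complex daemon" spec above drives scheduling purely through a node selector: pods appear only after node2 is labelled blue, disappear when the label flips to green, and come back once the DaemonSet's selector and update strategy are changed. A hedged client-go sketch of a DaemonSet built that way follows; the label key "color", the function name, the namespace argument and the image are placeholders rather than the test's actual values.

package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNodeSelectorDaemonSet creates a DaemonSet whose pods only land on nodes
// carrying the "color=blue" label and that rolls pods over with a RollingUpdate
// strategy, roughly the shape the spec above manipulates by relabelling node2.
func createNodeSelectorDaemonSet(ctx context.Context, client kubernetes.Interface, namespace string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labelled color=blue are eligible.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.4.1",
					}},
				},
			},
		},
	}
	_, err := client.AppsV1().DaemonSets(namespace).Create(ctx, ds, metav1.CreateOptions{})
	return err
}
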
• [SLOW TEST:13.323 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":6,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:17:12.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:17:12.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8029" for this suite. STEP: Destroying namespace "nspatchtest-625be42f-de55-4639-94ac-c58d5b810e49-2919" for this suite. 
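
The Namespace patch spec above boils down to a single API call: apply a patch that adds a label, then read the Namespace back and confirm the label is there. A minimal client-go sketch of that call follows, assuming an already-configured clientset; the label key and value are placeholders.

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel adds a label to an existing Namespace with a JSON merge
// patch, which is the operation the "should patch a Namespace" spec verifies.
func patchNamespaceLabel(ctx context.Context, client kubernetes.Interface, name string) error {
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	_, err := client.CoreV1().Namespaces().Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
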
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":7,"skipped":3064,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:17:12.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 17 22:17:12.819: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 22:18:12.877: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Jun 17 22:18:12.903: INFO: Created pod: pod0-sched-preemption-low-priority Jun 17 22:18:12.923: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:18:32.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1013" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:80.227 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":8,"skipped":3079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:18:33.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 22:18:33.053: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 22:18:33.060: INFO: Waiting for terminating namespaces to be deleted... 
Jun 17 22:18:33.063: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 22:18:33.072: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 22:18:33.072: INFO: Container discover ready: false, restart count 0 Jun 17 22:18:33.072: INFO: Container init ready: false, restart count 0 Jun 17 22:18:33.072: INFO: Container install ready: false, restart count 0 Jun 17 22:18:33.072: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:18:33.072: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 22:18:33.072: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:18:33.072: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:18:33.072: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:18:33.072: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:18:33.072: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:18:33.072: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:18:33.072: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:18:33.072: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:18:33.072: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:18:33.072: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:18:33.072: INFO: Container collectd ready: true, restart count 0 Jun 17 22:18:33.072: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:18:33.072: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:18:33.072: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:18:33.072: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:18:33.072: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:18:33.072: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 22:18:33.072: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:18:33.073: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:18:33.073: INFO: Container 
grafana ready: true, restart count 0 Jun 17 22:18:33.073: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:18:33.073: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.073: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:18:33.073: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 22:18:33.081: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 22:18:33.081: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:18:33.081: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:18:33.081: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 22:18:33.081: INFO: Container discover ready: false, restart count 0 Jun 17 22:18:33.081: INFO: Container init ready: false, restart count 0 Jun 17 22:18:33.081: INFO: Container install ready: false, restart count 0 Jun 17 22:18:33.081: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:18:33.081: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:18:33.081: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:18:33.081: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:18:33.081: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:18:33.081: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:18:33.081: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:18:33.081: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:18:33.081: INFO: Container collectd ready: true, restart count 0 Jun 17 22:18:33.081: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:18:33.081: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:18:33.081: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:18:33.081: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:18:33.081: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:18:33.081: INFO: pod1-sched-preemption-medium-priority from sched-preemption-1013 started at 2022-06-17 22:18:16 +0000 UTC (1 container statuses recorded) Jun 17 22:18:33.081: INFO: Container 
pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f98866cb3a3cc6], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:18:34.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1715" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":9,"skipped":3528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:18:34.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 17 22:18:34.163: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 22:19:34.218: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:19:34.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:19:34.258: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jun 17 22:19:34.261: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. 
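
The PriorityClass-endpoints spec above hits the scheduling.k8s.io/v1 API with different verbs, and the two INFO lines just logged show the one operation that must fail: changing an existing PriorityClass's value. A hedged client-go sketch of the same sequence follows; the class name, value and label are placeholders.

package e2esketch

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// exercisePriorityClass creates a PriorityClass, patches its metadata, and then
// tries to change its Value; the last call is rejected by the API server with
// the "Forbidden: may not be changed in an update" error seen in the log above.
func exercisePriorityClass(ctx context.Context, client kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "p1"},
		Value:      99,
	}
	created, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
	if err != nil {
		return err
	}

	// Mutating metadata (labels, annotations) is allowed.
	patch := []byte(`{"metadata":{"labels":{"e2e":"true"}}}`)
	if _, err := client.SchedulingV1().PriorityClasses().Patch(ctx, created.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}

	// Mutating Value is not; the update is expected to fail.
	created.Value = 100
	if _, err := client.SchedulingV1().PriorityClasses().Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		fmt.Println("expected rejection:", err)
	}
	return nil
}
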
[AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:19:34.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8638" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:19:34.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2247" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.217 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":10,"skipped":3624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:19:34.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 22:19:34.367: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 22:19:34.375: INFO: Waiting for terminating namespaces to be deleted... 
Jun 17 22:19:34.377: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 22:19:34.387: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 22:19:34.387: INFO: Container discover ready: false, restart count 0 Jun 17 22:19:34.387: INFO: Container init ready: false, restart count 0 Jun 17 22:19:34.387: INFO: Container install ready: false, restart count 0 Jun 17 22:19:34.387: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:19:34.387: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 22:19:34.387: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:19:34.387: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:19:34.387: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:19:34.387: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:19:34.387: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:19:34.387: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:19:34.387: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:19:34.387: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:19:34.387: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:19:34.387: INFO: Container collectd ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:19:34.387: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:19:34.387: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:19:34.387: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 22:19:34.387: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container 
grafana ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:19:34.387: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.387: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:19:34.387: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 22:19:34.395: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 22:19:34.395: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:19:34.395: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:19:34.395: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 22:19:34.395: INFO: Container discover ready: false, restart count 0 Jun 17 22:19:34.395: INFO: Container init ready: false, restart count 0 Jun 17 22:19:34.395: INFO: Container install ready: false, restart count 0 Jun 17 22:19:34.395: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:19:34.395: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:19:34.395: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:19:34.395: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:19:34.395: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:19:34.395: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:19:34.395: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:19:34.395: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:19:34.395: INFO: Container collectd ready: true, restart count 0 Jun 17 22:19:34.395: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:19:34.395: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:19:34.395: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:19:34.395: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:19:34.395: INFO: Container node-exporter ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: 
verifying the node has the label node node1 STEP: verifying the node has the label node node2 Jun 17 22:19:34.452: INFO: Pod cmk-5gtjq requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod cmk-webhook-6c9d5f8578-qcmrd requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod cmk-xh247 requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod kube-flannel-plbl8 requesting resource cpu=150m on Node node2 Jun 17 22:19:34.452: INFO: Pod kube-flannel-wqcwq requesting resource cpu=150m on Node node1 Jun 17 22:19:34.452: INFO: Pod kube-multus-ds-amd64-hblk4 requesting resource cpu=100m on Node node2 Jun 17 22:19:34.452: INFO: Pod kube-multus-ds-amd64-m6vf8 requesting resource cpu=100m on Node node1 Jun 17 22:19:34.452: INFO: Pod kube-proxy-pvtj6 requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod kube-proxy-t4lqk requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod kubernetes-dashboard-785dcbb76d-26kg6 requesting resource cpu=50m on Node node1 Jun 17 22:19:34.452: INFO: Pod kubernetes-metrics-scraper-5558854cb-w4nk8 requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Jun 17 22:19:34.452: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Jun 17 22:19:34.452: INFO: Pod node-feature-discovery-worker-82r46 requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod node-feature-discovery-worker-dgp4b requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod collectd-5src2 requesting resource cpu=0m on Node node1 Jun 17 22:19:34.452: INFO: Pod collectd-6bcqz requesting resource cpu=0m on Node node2 Jun 17 22:19:34.452: INFO: Pod node-exporter-8ftgl requesting resource cpu=112m on Node node1 Jun 17 22:19:34.452: INFO: Pod node-exporter-xgz6d requesting resource cpu=112m on Node node2 Jun 17 22:19:34.452: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Jun 17 22:19:34.452: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-tbvjv requesting resource cpu=0m on Node node1 STEP: Starting Pods to consume most of the cluster CPU. Jun 17 22:19:34.452: INFO: Creating a pod which consumes cpu=53454m on Node node1 Jun 17 22:19:34.463: INFO: Creating a pod which consumes cpu=53629m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202.16f9887514cba0dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8989/filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202.16f98875691b0ec6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202.16f988757b25d4a7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 302.683821ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202.16f98875823a2893], Reason = [Created], Message = [Created container filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202] STEP: Considering event: Type = [Normal], Name = [filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202.16f98875891f8dee], Reason = [Started], Message = [Started container filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0.16f98875144019bc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8989/filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0.16f9887572be67aa], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0.16f9887583ed7c81], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 288.291729ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0.16f988758b526cf4], Reason = [Created], Message = [Created container filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0.16f9887591e100aa], Reason = [Started], Message = [Started container filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f9887604bcdb9d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:19:39.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8989" for this suite. 
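
The resource-limits spec above works by arithmetic: it sums the CPU already requested on each node (the per-pod figures logged before the test), creates one filler pod per node sized to consume nearly all remaining allocatable CPU (53454m and 53629m here), and then shows that one more pod with any CPU request is rejected with the "Insufficient cpu" FailedScheduling event. A hedged sketch of such a filler pod follows; the helper name, namespace and request value are placeholders, and the pod is pinned to a node with the well-known kubernetes.io/hostname label rather than the affinity the test itself builds.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createFillerPod creates a pod with an explicit CPU request and limit, targeted
// at a single node via its hostname label, so the node's remaining allocatable
// CPU can be soaked up before the "additional" pod is attempted.
func createFillerPod(ctx context.Context, client kubernetes.Interface, namespace, nodeName, cpu string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-" + nodeName},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
	_, err := client.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
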
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.194 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":11,"skipped":3730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:19:39.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 22:19:39.564: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 22:19:39.573: INFO: Waiting for terminating namespaces to be deleted... 
Jun 17 22:19:39.575: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 22:19:39.585: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 22:19:39.585: INFO: Container discover ready: false, restart count 0 Jun 17 22:19:39.585: INFO: Container init ready: false, restart count 0 Jun 17 22:19:39.585: INFO: Container install ready: false, restart count 0 Jun 17 22:19:39.585: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.585: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:19:39.585: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 22:19:39.585: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:19:39.586: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:19:39.586: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:19:39.586: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:19:39.586: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:19:39.586: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:19:39.586: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:19:39.586: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:19:39.586: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:19:39.586: INFO: Container collectd ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:19:39.586: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:19:39.586: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:19:39.586: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 22:19:39.586: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container 
grafana ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:19:39.586: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:19:39.586: INFO: filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0 from sched-pred-8989 started at 2022-06-17 22:19:34 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.586: INFO: Container filler-pod-ed722489-fe68-49d8-9c75-e0914f9283a0 ready: true, restart count 0 Jun 17 22:19:39.586: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 22:19:39.593: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 22:19:39.593: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:19:39.593: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:19:39.593: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 22:19:39.593: INFO: Container discover ready: false, restart count 0 Jun 17 22:19:39.593: INFO: Container init ready: false, restart count 0 Jun 17 22:19:39.593: INFO: Container install ready: false, restart count 0 Jun 17 22:19:39.593: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.593: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:19:39.594: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:19:39.594: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:19:39.594: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:19:39.594: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:19:39.594: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:19:39.594: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:19:39.594: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:19:39.594: INFO: Container collectd ready: true, restart count 0 Jun 17 22:19:39.594: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:19:39.594: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:19:39.594: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:19:39.594: INFO: Container kube-rbac-proxy ready: true, restart count 0 
Jun 17 22:19:39.594: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:19:39.594: INFO: filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202 from sched-pred-8989 started at 2022-06-17 22:19:34 +0000 UTC (1 container statuses recorded) Jun 17 22:19:39.594: INFO: Container filler-pod-e9daee53-30ca-4422-ba48-b3a2f078e202 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-845549e6-98e1-443e-a922-2bcb9cfc41df 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-845549e6-98e1-443e-a922-2bcb9cfc41df off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-845549e6-98e1-443e-a922-2bcb9cfc41df [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:19:47.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3875" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.150 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":12,"skipped":3932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:19:47.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 17 22:19:48.002: INFO: Pod name wrapped-volume-race-cfef0bdc-a284-4984-ad42-f47016864ab7: Found 3 pods out of 5 Jun 17 22:19:53.017: INFO: Pod name wrapped-volume-race-cfef0bdc-a284-4984-ad42-f47016864ab7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cfef0bdc-a284-4984-ad42-f47016864ab7 in namespace emptydir-wrapper-3088, will wait for the 
garbage collector to delete the pods Jun 17 22:20:07.115: INFO: Deleting ReplicationController wrapped-volume-race-cfef0bdc-a284-4984-ad42-f47016864ab7 took: 4.96437ms Jun 17 22:20:07.215: INFO: Terminating ReplicationController wrapped-volume-race-cfef0bdc-a284-4984-ad42-f47016864ab7 pods took: 100.799719ms STEP: Creating RC which spawns configmap-volume pods Jun 17 22:20:18.533: INFO: Pod name wrapped-volume-race-0dac71fa-8979-4c01-b204-847b11fb8ea9: Found 0 pods out of 5 Jun 17 22:20:23.549: INFO: Pod name wrapped-volume-race-0dac71fa-8979-4c01-b204-847b11fb8ea9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0dac71fa-8979-4c01-b204-847b11fb8ea9 in namespace emptydir-wrapper-3088, will wait for the garbage collector to delete the pods Jun 17 22:20:37.632: INFO: Deleting ReplicationController wrapped-volume-race-0dac71fa-8979-4c01-b204-847b11fb8ea9 took: 7.226472ms Jun 17 22:20:37.733: INFO: Terminating ReplicationController wrapped-volume-race-0dac71fa-8979-4c01-b204-847b11fb8ea9 pods took: 101.247044ms STEP: Creating RC which spawns configmap-volume pods Jun 17 22:20:49.452: INFO: Pod name wrapped-volume-race-c3208689-9e2b-4841-aa72-674d22e06939: Found 0 pods out of 5 Jun 17 22:20:54.460: INFO: Pod name wrapped-volume-race-c3208689-9e2b-4841-aa72-674d22e06939: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c3208689-9e2b-4841-aa72-674d22e06939 in namespace emptydir-wrapper-3088, will wait for the garbage collector to delete the pods Jun 17 22:21:08.548: INFO: Deleting ReplicationController wrapped-volume-race-c3208689-9e2b-4841-aa72-674d22e06939 took: 5.54542ms Jun 17 22:21:08.649: INFO: Terminating ReplicationController wrapped-volume-race-c3208689-9e2b-4841-aa72-674d22e06939 pods took: 100.914158ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:21:18.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3088" for this suite. 
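
The EmptyDir-wrapper spec above repeatedly spawns a 5-replica ReplicationController whose pods each mount all 50 ConfigMaps as separate volumes, then deletes and recreates the set to look for races in volume setup. A hedged sketch of the pod shape involved follows; the function and object names, mount paths and image are placeholders, and the ConfigMap count is left as a parameter instead of the test's 50.

package e2esketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrappedConfigMapPod builds a pod that mounts `count` ConfigMap volumes side by
// side, the per-pod pattern the spec above multiplies across a 5-replica
// ReplicationController. The ConfigMaps named here must already exist.
func wrappedConfigMapPod(namespace string, count int) *corev1.Pod {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < count; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race", Namespace: namespace},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/pause:3.4.1",
				VolumeMounts: mounts,
			}},
		},
	}
}
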
• [SLOW TEST:90.922 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":13,"skipped":3958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:21:18.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 17 22:21:18.654: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 22:22:18.706: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Jun 17 22:22:18.732: INFO: Created pod: pod0-sched-preemption-low-priority Jun 17 22:22:18.752: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:22:52.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2856" for this suite. 
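
The basic-preemption spec above is the user-defined analogue of the earlier critical-pod case: pods at low and medium priority fill about two thirds of each node, then a pod bound to a higher PriorityClass with the same resource shape is created and the scheduler evicts a victim to admit it. A hedged sketch of creating such a class and preemptor pod follows; the names, class value and request sizes are placeholders.

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runPreemptorPod creates a user-defined high PriorityClass and a pod that uses
// it while requesting the same resources as an already-running low-priority pod,
// the setup under which the scheduler is expected to preempt the victim.
func runPreemptorPod(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
		Value:      1000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: pc.Name,
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("256Mi"),
					},
				},
			}},
		},
	}
	_, err := client.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
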
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:94.216 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":14,"skipped":4173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:22:52.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 17 22:22:52.891: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 22:23:52.948: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:23:52.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jun 17 22:23:57.006: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:24:11.071: INFO: pods created so far: [1 1 1] Jun 17 22:24:11.071: INFO: length of pods created so far: 3 Jun 17 22:24:25.087: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:24:32.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8599" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:24:32.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7353" for this suite. 
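The PreemptionExecutionPath spec above exercises the same preemption logic through ReplicaSets rather than bare pods: controllers at several priority levels compete for one node, and the counts logged as "pods created so far: [2 2 1]" track how many pods each ReplicaSet has managed to start. A rough sketch of one such ReplicaSet (names, namespace, image, and sizes are placeholders):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PriorityReplicaSet returns a ReplicaSet whose pods run at a named priority,
// so that higher-priority ReplicaSets can preempt lower-priority ones when a
// node is full.
func PriorityReplicaSet(name, priorityClass string, replicas int32) *appsv1.ReplicaSet {
	labels := map[string]string{"app": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					PriorityClassName: priorityClass,
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1")},
						},
					}},
				},
			},
		},
	}
}

Such an object would be created with cs.AppsV1().ReplicaSets("default").Create(ctx, rs, metav1.CreateOptions{}) using the same kind of clientset as in the previous sketch; one ReplicaSet per priority level reproduces the competing controllers the spec describes.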
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:99.320 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":15,"skipped":5427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:24:32.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 17 22:24:32.224: INFO: Create a RollingUpdate DaemonSet Jun 17 22:24:32.229: INFO: Check that daemon pods launch on every node of the cluster Jun 17 22:24:32.235: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:32.235: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:32.235: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:32.240: INFO: Number of nodes with available pods: 0 Jun 17 22:24:32.240: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:33.246: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:33.246: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:33.246: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:33.250: INFO: Number of nodes with available 
pods: 0 Jun 17 22:24:33.250: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:34.245: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:34.245: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:34.245: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:34.248: INFO: Number of nodes with available pods: 0 Jun 17 22:24:34.248: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:35.247: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:35.247: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:35.247: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:35.251: INFO: Number of nodes with available pods: 0 Jun 17 22:24:35.251: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:36.247: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:36.247: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:36.247: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:36.252: INFO: Number of nodes with available pods: 1 Jun 17 22:24:36.252: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:37.246: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:37.246: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:37.246: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:37.249: INFO: Number of nodes with available pods: 1 Jun 17 22:24:37.249: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:38.249: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:38.249: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:38.249: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:38.252: INFO: Number of nodes with available pods: 1 Jun 17 22:24:38.252: INFO: Node node1 is running more than one daemon pod Jun 17 22:24:39.245: INFO: DaemonSet pods 
can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:39.245: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:39.245: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:39.247: INFO: Number of nodes with available pods: 2 Jun 17 22:24:39.247: INFO: Number of running nodes: 2, number of available pods: 2 Jun 17 22:24:39.247: INFO: Update the DaemonSet to trigger a rollout Jun 17 22:24:39.255: INFO: Updating DaemonSet daemon-set Jun 17 22:24:50.268: INFO: Roll back the DaemonSet before rollout is complete Jun 17 22:24:50.277: INFO: Updating DaemonSet daemon-set Jun 17 22:24:50.277: INFO: Make sure DaemonSet rollback is complete Jun 17 22:24:50.279: INFO: Wrong image for pod: daemon-set-rdqcx. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. Jun 17 22:24:50.279: INFO: Pod daemon-set-rdqcx is not available Jun 17 22:24:50.283: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:50.284: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:50.284: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:51.292: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:51.292: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:51.292: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:52.293: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:52.293: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:52.293: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:53.292: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:53.293: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:53.293: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:54.294: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
Jun 17 22:24:54.294: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:54.294: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:55.293: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:55.294: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:55.294: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:56.294: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:56.294: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:56.294: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:57.292: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:57.292: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:57.292: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:58.294: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:58.294: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:58.294: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:59.292: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:59.292: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:24:59.292: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:25:00.291: INFO: Pod daemon-set-25lpm is not available Jun 17 22:25:00.295: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:25:00.295: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 17 22:25:00.295: INFO: 
DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3315, will wait for the garbage collector to delete the pods Jun 17 22:25:00.357: INFO: Deleting DaemonSet.extensions daemon-set took: 4.605372ms Jun 17 22:25:00.558: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.828504ms Jun 17 22:25:09.362: INFO: Number of nodes with available pods: 0 Jun 17 22:25:09.362: INFO: Number of running nodes: 0, number of available pods: 0 Jun 17 22:25:09.365: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55211"},"items":null} Jun 17 22:25:09.367: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55211"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:25:09.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3315" for this suite. • [SLOW TEST:37.208 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":16,"skipped":5681,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 22:25:09.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 22:25:09.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 22:25:09.419: INFO: Waiting for terminating namespaces to be deleted... 
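The DaemonSet rollback spec that just passed creates a RollingUpdate DaemonSet, switches its pod template to an image that cannot be pulled ("foo:non-existent" in the log), and then rolls back before the broken rollout finishes, checking that the still-healthy pods from the old revision are not restarted. Outside the framework the rollback is usually done with "kubectl rollout undo daemonset/<name>"; a client-go sketch of the same update-then-revert flow (the namespace and DaemonSet name are placeholders, and the DaemonSet is assumed to already exist):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("default")

	// Trigger a rolling update by switching to an image that cannot be pulled.
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	goodImage := ds.Spec.Template.Spec.Containers[0].Image
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the broken rollout finishes by restoring the old image;
	// healthy pods from the previous revision should keep running without restarts.
	ds, err = dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = goodImage
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolled back to", goodImage)
}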
Jun 17 22:25:09.422: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 22:25:09.431: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 22:25:09.431: INFO: Container discover ready: false, restart count 0 Jun 17 22:25:09.431: INFO: Container init ready: false, restart count 0 Jun 17 22:25:09.431: INFO: Container install ready: false, restart count 0 Jun 17 22:25:09.431: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 22:25:09.431: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 22:25:09.431: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:25:09.431: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:25:09.431: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:25:09.431: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:25:09.431: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 22:25:09.431: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:25:09.431: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:25:09.431: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:25:09.431: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:25:09.431: INFO: Container collectd ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:25:09.431: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:25:09.431: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container node-exporter ready: true, restart count 0 Jun 17 22:25:09.431: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 22:25:09.431: INFO: Container config-reloader ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container 
grafana ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Container prometheus ready: true, restart count 1 Jun 17 22:25:09.431: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.431: INFO: Container tas-extender ready: true, restart count 0 Jun 17 22:25:09.431: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 22:25:09.437: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 22:25:09.437: INFO: Container nodereport ready: true, restart count 0 Jun 17 22:25:09.437: INFO: Container reconcile ready: true, restart count 0 Jun 17 22:25:09.437: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 22:25:09.437: INFO: Container discover ready: false, restart count 0 Jun 17 22:25:09.437: INFO: Container init ready: false, restart count 0 Jun 17 22:25:09.437: INFO: Container install ready: false, restart count 0 Jun 17 22:25:09.437: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.437: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 22:25:09.437: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.437: INFO: Container kube-multus ready: true, restart count 1 Jun 17 22:25:09.437: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.437: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 22:25:09.437: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.438: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 22:25:09.438: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.438: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 22:25:09.438: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.438: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 22:25:09.438: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 22:25:09.438: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 22:25:09.438: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 22:25:09.438: INFO: Container collectd ready: true, restart count 0 Jun 17 22:25:09.438: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 22:25:09.438: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 22:25:09.438: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 22:25:09.438: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 22:25:09.438: INFO: Container node-exporter ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e3ac7499-9199-4822-afdc-8f75b15777d3 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e3ac7499-9199-4822-afdc-8f75b15777d3 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e3ac7499-9199-4822-afdc-8f75b15777d3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 22:30:19.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2471" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.155 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":17,"skipped":5682,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJun 17 22:30:19.543: INFO: Running AfterSuite actions on all nodes Jun 17 22:30:19.543: INFO: Running AfterSuite actions on node 1 Jun 17 22:30:19.543: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0} Ran 17 of 5773 Specs in 919.178 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped PASS Ginkgo ran 1 suite in 15m20.578871755s Test Suite Passed
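The last spec in the run checks that the scheduler treats two pods as conflicting when they request the same hostPort and protocol on the same node, even though one binds hostIP 0.0.0.0 and the other the node's own address (10.10.190.208 in the log): pod4 schedules, pod5 must stay Pending. A small sketch of two such pod specs (node name, namespace, and image are placeholders; the port matches the one used above):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HostPortPod steers a pause pod to a specific node via the well-known
// kubernetes.io/hostname label and claims host port 54322 for the given
// hostIP. Two such pods with the same port and protocol conflict even when
// one binds 0.0.0.0 and the other the node's real address.
func HostPortPod(name, nodeName, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: corev1.PodSpec{
			// Use a node selector rather than spec.nodeName so the scheduler
			// (not the kubelet) is the component that has to spot the conflict.
			NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

// Usage, mirroring the spec above:
//   pod4 := HostPortPod("pod4", "node2", "0.0.0.0")       // expected to schedule
//   pod5 := HostPortPod("pod5", "node2", "10.10.190.208") // expected to stay Pending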