I0513 22:15:01.498632 23 e2e.go:129] Starting e2e run "4fbe2901-ef39-41a3-909c-28ce9b742807" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1652480100 - Will randomize all specs Will run 17 of 5773 specs May 13 22:15:01.566: INFO: >>> kubeConfig: /root/.kube/config May 13 22:15:01.571: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 13 22:15:01.592: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 13 22:15:01.664: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting May 13 22:15:01.664: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting May 13 22:15:01.664: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 13 22:15:01.664: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 13 22:15:01.664: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 13 22:15:01.678: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 13 22:15:01.678: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 13 22:15:01.678: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 13 22:15:01.678: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 13 22:15:01.678: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 13 22:15:01.678: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 13 22:15:01.678: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 13 22:15:01.678: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 13 22:15:01.678: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 13 22:15:01.678: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 13 22:15:01.678: INFO: e2e test version: v1.21.9 May 13 22:15:01.679: INFO: kube-apiserver version: v1.21.1 May 13 22:15:01.679: INFO: >>> kubeConfig: /root/.kube/config May 13 22:15:01.686: INFO: Cluster IP family: ipv4 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:15:01.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces W0513 22:15:01.728146 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 22:15:01.728: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 22:15:01.731: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:15:32.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9894" for this suite. STEP: Destroying namespace "nsdeletetest-848" for this suite. May 13 22:15:32.826: INFO: Namespace nsdeletetest-848 was already deleted STEP: Destroying namespace "nsdeletetest-380" for this suite. 
• [SLOW TEST:31.131 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":1,"skipped":952,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:15:32.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 13 22:15:32.873: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:32.873: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:32.873: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:32.875: INFO: Number of nodes with available pods: 0 May 13 22:15:32.876: INFO: Node node1 is running more than one daemon pod May 13 22:15:33.880: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:33.881: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:33.881: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:33.883: INFO: Number of nodes with available pods: 0 May 13 22:15:33.883: INFO: Node node1 is running more than one daemon pod May 13 22:15:34.881: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:34.881: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:34.881: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 13 22:15:34.884: INFO: Number of nodes with available pods: 0 May 13 22:15:34.884: INFO: Node node1 is running more than one daemon pod May 13 22:15:35.883: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:35.883: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:35.883: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:35.886: INFO: Number of nodes with available pods: 1 May 13 22:15:35.887: INFO: Node node1 is running more than one daemon pod May 13 22:15:36.883: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.883: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.883: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.886: INFO: Number of nodes with available pods: 2 May 13 22:15:36.886: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 13 22:15:36.901: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.901: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.901: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:15:36.903: INFO: Number of nodes with available pods: 2 May 13 22:15:36.903: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7776, will wait for the garbage collector to delete the pods May 13 22:15:37.971: INFO: Deleting DaemonSet.extensions daemon-set took: 3.914736ms May 13 22:15:38.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.174929ms May 13 22:15:52.375: INFO: Number of nodes with available pods: 0 May 13 22:15:52.375: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:15:52.381: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51844"},"items":null} May 13 22:15:52.383: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51844"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:15:52.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7776" for this suite. • [SLOW TEST:19.577 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":2,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:15:52.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 13 22:15:52.446: INFO: Waiting up to 1m0s for all nodes to be ready May 13 22:16:52.495: INFO: Waiting for terminating namespaces to be deleted... 
[It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. May 13 22:16:52.528: INFO: Created pod: pod0-sched-preemption-low-priority May 13 22:16:52.548: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:17:16.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1129" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:84.230 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":3,"skipped":1575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:17:16.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:17:16.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7882" for this suite. STEP: Destroying namespace "nspatchtest-0d3a5352-dd02-420f-917a-3c0d906eba30-299" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":4,"skipped":1744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:17:16.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:17:22.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2391" for this suite. STEP: Destroying namespace "nsdeletetest-1630" for this suite. May 13 22:17:22.817: INFO: Namespace nsdeletetest-1630 was already deleted STEP: Destroying namespace "nsdeletetest-9163" for this suite. 
• [SLOW TEST:6.091 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":5,"skipped":1852,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:17:22.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:17:22.868: INFO: Create a RollingUpdate DaemonSet May 13 22:17:22.871: INFO: Check that daemon pods launch on every node of the cluster May 13 22:17:22.875: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:22.875: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:22.875: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:22.877: INFO: Number of nodes with available pods: 0 May 13 22:17:22.878: INFO: Node node1 is running more than one daemon pod May 13 22:17:23.884: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:23.884: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:23.884: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:23.892: INFO: Number of nodes with available pods: 0 May 13 22:17:23.892: INFO: Node node1 is running more than one daemon pod May 13 22:17:24.884: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:24.884: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:24.884: INFO: DaemonSet pods can't tolerate node master3 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:24.887: INFO: Number of nodes with available pods: 0 May 13 22:17:24.887: INFO: Node node1 is running more than one daemon pod May 13 22:17:25.883: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:25.883: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:25.883: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:25.886: INFO: Number of nodes with available pods: 1 May 13 22:17:25.886: INFO: Node node1 is running more than one daemon pod May 13 22:17:26.882: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:26.882: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:26.882: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:26.885: INFO: Number of nodes with available pods: 2 May 13 22:17:26.885: INFO: Number of running nodes: 2, number of available pods: 2 May 13 22:17:26.885: INFO: Update the DaemonSet to trigger a rollout May 13 22:17:26.893: INFO: Updating DaemonSet daemon-set May 13 22:17:32.909: INFO: Roll back the DaemonSet before rollout is complete May 13 22:17:32.919: INFO: Updating DaemonSet daemon-set May 13 22:17:32.919: INFO: Make sure DaemonSet rollback is complete May 13 22:17:32.922: INFO: Wrong image for pod: daemon-set-v5bq5. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
May 13 22:17:32.922: INFO: Pod daemon-set-v5bq5 is not available May 13 22:17:32.926: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:32.927: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:32.927: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:33.935: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:33.935: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:33.935: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:34.937: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:34.937: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:34.937: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:35.931: INFO: Pod daemon-set-p44j2 is not available May 13 22:17:35.935: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:35.935: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:17:35.935: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9719, will wait for the garbage collector to delete the pods May 13 22:17:35.999: INFO: Deleting DaemonSet.extensions daemon-set took: 6.494063ms May 13 22:17:36.099: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.15794ms May 13 22:17:42.403: INFO: Number of nodes with available pods: 0 May 13 22:17:42.403: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:17:42.406: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"52387"},"items":null} May 13 22:17:42.408: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"52387"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:17:42.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9719" for this suite. 
• [SLOW TEST:19.609 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":6,"skipped":1877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:17:42.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 22:17:42.458: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:17:42.466: INFO: Waiting for terminating namespaces to be deleted... May 13 22:17:42.468: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 22:17:42.475: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 22:17:42.475: INFO: Container discover ready: false, restart count 0 May 13 22:17:42.475: INFO: Container init ready: false, restart count 0 May 13 22:17:42.475: INFO: Container install ready: false, restart count 0 May 13 22:17:42.475: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:17:42.475: INFO: Container nodereport ready: true, restart count 0 May 13 22:17:42.475: INFO: Container reconcile ready: true, restart count 0 May 13 22:17:42.475: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:17:42.475: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:17:42.475: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container kube-multus ready: true, restart count 1 May 13 22:17:42.475: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:17:42.475: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container 
kubernetes-dashboard ready: true, restart count 2 May 13 22:17:42.475: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:17:42.475: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:17:42.475: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:17:42.475: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:17:42.475: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:17:42.475: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:17:42.475: INFO: Container collectd ready: true, restart count 0 May 13 22:17:42.475: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:17:42.475: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:17:42.475: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:17:42.475: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:42.475: INFO: Container node-exporter ready: true, restart count 0 May 13 22:17:42.475: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 +0000 UTC (4 container statuses recorded) May 13 22:17:42.475: INFO: Container config-reloader ready: true, restart count 0 May 13 22:17:42.475: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:17:42.475: INFO: Container grafana ready: true, restart count 0 May 13 22:17:42.475: INFO: Container prometheus ready: true, restart count 1 May 13 22:17:42.475: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 22:17:42.485: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 22:17:42.485: INFO: Container discover ready: false, restart count 0 May 13 22:17:42.485: INFO: Container init ready: false, restart count 0 May 13 22:17:42.485: INFO: Container install ready: false, restart count 0 May 13 22:17:42.485: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:17:42.485: INFO: Container nodereport ready: true, restart count 0 May 13 22:17:42.485: INFO: Container reconcile ready: true, restart count 0 May 13 22:17:42.485: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:17:42.485: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-multus ready: true, restart count 1 May 13 22:17:42.485: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:17:42.485: INFO: nginx-proxy-node2 from 
kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:17:42.485: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:17:42.485: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:17:42.485: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:17:42.485: INFO: Container collectd ready: true, restart count 0 May 13 22:17:42.485: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:17:42.485: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:17:42.485: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:42.485: INFO: Container node-exporter ready: true, restart count 0 May 13 22:17:42.485: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 22:17:42.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:42.485: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:17:42.485: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 22:17:42.485: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 May 13 22:17:42.537: INFO: Pod cmk-qhbd6 requesting resource cpu=0m on Node node2 May 13 22:17:42.537: INFO: Pod cmk-tfblh requesting resource cpu=0m on Node node1 May 13 22:17:42.537: INFO: Pod cmk-webhook-6c9d5f8578-59hj6 requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod kube-flannel-lv9xf requesting resource cpu=150m on Node node2 May 13 22:17:42.538: INFO: Pod kube-flannel-xfj7m requesting resource cpu=150m on Node node1 May 13 22:17:42.538: INFO: Pod kube-multus-ds-amd64-dtt2x requesting resource cpu=100m on Node node1 May 13 22:17:42.538: INFO: Pod kube-multus-ds-amd64-l7nx2 requesting resource cpu=100m on Node node2 May 13 22:17:42.538: INFO: Pod kube-proxy-rs2zg requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod kube-proxy-wkzbm requesting resource cpu=0m on Node node2 May 13 22:17:42.538: INFO: Pod kubernetes-dashboard-785dcbb76d-tcgth requesting resource cpu=50m on Node node1 May 13 22:17:42.538: INFO: Pod kubernetes-metrics-scraper-5558854cb-2bw7v requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 May 13 22:17:42.538: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 May 13 22:17:42.538: INFO: Pod node-feature-discovery-worker-cxxqf requesting resource cpu=0m on Node node2 May 13 22:17:42.538: INFO: Pod 
node-feature-discovery-worker-l459c requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt requesting resource cpu=0m on Node node2 May 13 22:17:42.538: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod collectd-9gqhr requesting resource cpu=0m on Node node2 May 13 22:17:42.538: INFO: Pod collectd-p26j2 requesting resource cpu=0m on Node node1 May 13 22:17:42.538: INFO: Pod node-exporter-42x8d requesting resource cpu=112m on Node node1 May 13 22:17:42.538: INFO: Pod node-exporter-n5snd requesting resource cpu=112m on Node node2 May 13 22:17:42.538: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 May 13 22:17:42.538: INFO: Pod prometheus-operator-585ccfb458-vrwnp requesting resource cpu=100m on Node node2 May 13 22:17:42.538: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. May 13 22:17:42.538: INFO: Creating a pod which consumes cpu=53454m on Node node1 May 13 22:17:42.550: INFO: Creating a pod which consumes cpu=53559m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-06c38ae3-c013-4887-83be-400563317ec1.16eeca0b2861702d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3444/filler-pod-06c38ae3-c013-4887-83be-400563317ec1 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-06c38ae3-c013-4887-83be-400563317ec1.16eeca0b86e560a0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-06c38ae3-c013-4887-83be-400563317ec1.16eeca0b9a9de939], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 330.852518ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-06c38ae3-c013-4887-83be-400563317ec1.16eeca0ba0b6f661], Reason = [Created], Message = [Created container filler-pod-06c38ae3-c013-4887-83be-400563317ec1] STEP: Considering event: Type = [Normal], Name = [filler-pod-06c38ae3-c013-4887-83be-400563317ec1.16eeca0ba79dc7ba], Reason = [Started], Message = [Started container filler-pod-06c38ae3-c013-4887-83be-400563317ec1] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d120446-ffa1-470c-99fd-109d4395244b.16eeca0b27f1dd60], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3444/filler-pod-2d120446-ffa1-470c-99fd-109d4395244b to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d120446-ffa1-470c-99fd-109d4395244b.16eeca0b806f04d4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d120446-ffa1-470c-99fd-109d4395244b.16eeca0b949f6844], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 338.706325ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d120446-ffa1-470c-99fd-109d4395244b.16eeca0b9c4133f9], Reason = [Created], Message = [Created container filler-pod-2d120446-ffa1-470c-99fd-109d4395244b] STEP: Considering event: Type = [Normal], Name = [filler-pod-2d120446-ffa1-470c-99fd-109d4395244b.16eeca0ba4469513], Reason = [Started], Message = [Started container filler-pod-2d120446-ffa1-470c-99fd-109d4395244b] STEP: Considering event: Type = [Warning], Name = [additional-pod.16eeca0c181287c2], Reason = 
[FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:17:47.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3444" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.188 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":7,"skipped":2094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:17:47.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 22:17:47.645: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:17:47.653: INFO: Waiting for terminating namespaces to be deleted... 
May 13 22:17:47.655: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 22:17:47.664: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 22:17:47.664: INFO: Container discover ready: false, restart count 0 May 13 22:17:47.664: INFO: Container init ready: false, restart count 0 May 13 22:17:47.664: INFO: Container install ready: false, restart count 0 May 13 22:17:47.664: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:17:47.664: INFO: Container nodereport ready: true, restart count 0 May 13 22:17:47.664: INFO: Container reconcile ready: true, restart count 0 May 13 22:17:47.664: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:17:47.664: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:17:47.664: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kube-multus ready: true, restart count 1 May 13 22:17:47.664: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:17:47.664: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:17:47.664: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:17:47.664: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:17:47.664: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:17:47.664: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:17:47.664: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:17:47.664: INFO: Container collectd ready: true, restart count 0 May 13 22:17:47.664: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:17:47.664: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:17:47.664: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:17:47.664: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:47.664: INFO: Container node-exporter ready: true, restart count 0 May 13 22:17:47.664: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 22:17:47.664: INFO: Container config-reloader ready: true, restart count 0 May 13 22:17:47.664: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:17:47.664: INFO: Container grafana ready: true, restart count 0 May 13 22:17:47.664: INFO: Container prometheus ready: true, restart count 1 May 13 22:17:47.664: INFO: filler-pod-2d120446-ffa1-470c-99fd-109d4395244b from sched-pred-3444 started at 2022-05-13 22:17:42 +0000 UTC (1 container statuses recorded) May 13 22:17:47.664: INFO: Container filler-pod-2d120446-ffa1-470c-99fd-109d4395244b ready: true, restart count 0 May 13 22:17:47.664: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 22:17:47.675: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 22:17:47.675: INFO: Container discover ready: false, restart count 0 May 13 22:17:47.675: INFO: Container init ready: false, restart count 0 May 13 22:17:47.675: INFO: Container install ready: false, restart count 0 May 13 22:17:47.675: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:17:47.675: INFO: Container nodereport ready: true, restart count 0 May 13 22:17:47.675: INFO: Container reconcile ready: true, restart count 0 May 13 22:17:47.675: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:17:47.675: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container kube-multus ready: true, restart count 1 May 13 22:17:47.675: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:17:47.675: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:17:47.675: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:17:47.675: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:17:47.675: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:17:47.675: INFO: Container collectd ready: true, restart count 0 May 13 22:17:47.675: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:17:47.675: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:17:47.675: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:17:47.675: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:47.675: INFO: Container node-exporter ready: true, restart count 0 May 13 22:17:47.675: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 
22:17:47.675: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:17:47.675: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:17:47.675: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container tas-extender ready: true, restart count 0 May 13 22:17:47.675: INFO: filler-pod-06c38ae3-c013-4887-83be-400563317ec1 from sched-pred-3444 started at 2022-05-13 22:17:42 +0000 UTC (1 container statuses recorded) May 13 22:17:47.675: INFO: Container filler-pod-06c38ae3-c013-4887-83be-400563317ec1 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-43eab1a0-d6a8-4599-9941-e78e351e1b03 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-43eab1a0-d6a8-4599-9941-e78e351e1b03 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-43eab1a0-d6a8-4599-9941-e78e351e1b03 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:22:55.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-460" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.156 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":8,"skipped":2245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:22:55.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 13 22:22:55.822: INFO: Waiting up to 1m0s for all nodes to be ready May 13 22:23:55.877: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. May 13 22:23:55.903: INFO: Created pod: pod0-sched-preemption-low-priority May 13 22:23:55.921: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:24:17.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3274" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.209 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":9,"skipped":2660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:24:18.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 13 22:24:18.056: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:18.056: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:18.056: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:18.062: INFO: Number of nodes with available pods: 0 May 13 22:24:18.062: INFO: Node node1 is running more than one daemon pod May 13 22:24:19.068: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:19.068: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:19.068: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:19.070: INFO: Number of nodes with available pods: 0 May 13 22:24:19.070: INFO: Node node1 is running more than one daemon pod May 13 22:24:20.067: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:20.067: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:20.067: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:20.070: INFO: Number of nodes with available pods: 0 May 13 22:24:20.070: INFO: Node node1 is running more than one daemon pod May 13 22:24:21.069: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.069: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.069: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.072: INFO: Number of nodes with available pods: 2 May 13 22:24:21.072: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 13 22:24:21.086: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.086: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.086: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:21.088: INFO: Number of nodes with available pods: 1 May 13 22:24:21.088: INFO: Node node2 is running more than one daemon pod May 13 22:24:22.093: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:22.093: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:22.093: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:22.096: INFO: Number of nodes with available pods: 1 May 13 22:24:22.096: INFO: Node node2 is running more than one daemon pod May 13 22:24:23.094: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:23.094: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:23.095: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:23.098: INFO: Number of nodes with available pods: 1 May 13 22:24:23.098: INFO: Node node2 is running more than one daemon pod May 13 22:24:24.095: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:24.095: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:24.095: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:24.097: INFO: Number of nodes with available pods: 1 May 13 22:24:24.097: INFO: Node node2 is running more than one daemon pod May 13 22:24:25.094: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:25.095: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:25.095: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:25.097: INFO: Number of nodes with available pods: 1 May 13 22:24:25.097: INFO: Node node2 is running more than one daemon pod May 13 22:24:26.094: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:26.095: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:26.095: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:26.097: INFO: Number of nodes with available pods: 1 May 13 22:24:26.098: INFO: Node node2 is running more than one daemon pod May 13 22:24:27.093: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:27.093: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:27.093: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:27.095: INFO: Number of nodes with available pods: 2 May 13 22:24:27.095: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6600, will wait for the garbage collector to delete the pods May 13 22:24:27.155: INFO: Deleting DaemonSet.extensions daemon-set took: 3.992051ms May 13 22:24:27.256: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.806179ms May 13 22:24:32.359: INFO: Number of nodes with available pods: 0 May 13 22:24:32.359: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:24:32.362: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53707"},"items":null} May 13 22:24:32.363: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53707"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:24:32.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6600" for this suite. 
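Editor's note: the "simple daemon" spec above creates a DaemonSet, waits until a pod is available on every schedulable node, deletes one pod, and waits for the controller to revive it. The repeated "can't tolerate node master1/2/3" lines show why only the two worker nodes are counted: the DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint. Below is a hedged sketch of a comparable DaemonSet object, assuming k8s.io/api types; the label key, container name, and the commented-out toleration are illustrative, not the test's own manifest.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
					// Without a toleration like the one below, the pods skip
					// tainted control-plane nodes, which is why the log keeps
					// reporting "can't tolerate node master1/2/3".
					// Tolerations: []v1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/master",
					// 	Operator: v1.TolerationOpExists,
					// 	Effect:   v1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}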
• [SLOW TEST:14.384 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":10,"skipped":2879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:24:32.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:24:32.427: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 13 22:24:32.434: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:32.434: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:32.434: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:32.437: INFO: Number of nodes with available pods: 0 May 13 22:24:32.437: INFO: Node node1 is running more than one daemon pod May 13 22:24:33.443: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:33.443: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:33.443: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:33.446: INFO: Number of nodes with available pods: 0 May 13 22:24:33.446: INFO: Node node1 is running more than one daemon pod May 13 22:24:34.445: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:34.445: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:34.445: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:34.449: INFO: Number of nodes with available pods: 0 May 13 22:24:34.449: INFO: Node node1 is running more than one daemon pod May 13 22:24:35.442: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:35.442: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:35.442: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:35.444: INFO: Number of nodes with available pods: 2 May 13 22:24:35.444: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 13 22:24:35.471: INFO: Wrong image for pod: daemon-set-2d66r. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:35.471: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 13 22:24:35.476: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:35.476: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:35.476: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:36.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:36.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:36.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:36.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:37.481: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:37.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:37.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:37.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:38.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:38.486: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:38.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:38.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:39.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:39.488: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:39.488: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:39.488: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:40.480: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 13 22:24:40.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:40.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:40.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:41.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:41.486: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:41.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:41.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:42.481: INFO: Pod daemon-set-lhv4l is not available May 13 22:24:42.481: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:42.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:42.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:42.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:43.482: INFO: Pod daemon-set-lhv4l is not available May 13 22:24:43.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:43.486: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:43.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:43.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:44.482: INFO: Pod daemon-set-lhv4l is not available May 13 22:24:44.482: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
May 13 22:24:44.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:44.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:44.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:45.480: INFO: Pod daemon-set-lhv4l is not available May 13 22:24:45.480: INFO: Wrong image for pod: daemon-set-qdq94. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. May 13 22:24:45.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:45.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:45.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:46.486: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:46.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:46.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:47.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:47.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:47.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:48.490: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:48.490: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:48.491: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:49.483: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:49.483: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:49.483: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:50.486: INFO: DaemonSet pods can't 
tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:50.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:50.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:51.486: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:51.486: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:51.486: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.482: INFO: Pod daemon-set-xjxml is not available May 13 22:24:52.485: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.485: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.485: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 13 22:24:52.490: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.490: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.490: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:52.493: INFO: Number of nodes with available pods: 1 May 13 22:24:52.493: INFO: Node node2 is running more than one daemon pod May 13 22:24:53.499: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:53.499: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:53.499: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:53.502: INFO: Number of nodes with available pods: 1 May 13 22:24:53.502: INFO: Node node2 is running more than one daemon pod May 13 22:24:54.500: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:54.500: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:54.500: INFO: DaemonSet pods can't tolerate node master3 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:54.503: INFO: Number of nodes with available pods: 1 May 13 22:24:54.503: INFO: Node node2 is running more than one daemon pod May 13 22:24:55.499: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:55.499: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:55.499: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 13 22:24:55.502: INFO: Number of nodes with available pods: 2 May 13 22:24:55.502: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9764, will wait for the garbage collector to delete the pods May 13 22:24:55.574: INFO: Deleting DaemonSet.extensions daemon-set took: 5.470279ms May 13 22:24:55.675: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.758302ms May 13 22:25:02.378: INFO: Number of nodes with available pods: 0 May 13 22:25:02.378: INFO: Number of running nodes: 0, number of available pods: 0 May 13 22:25:02.380: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53902"},"items":null} May 13 22:25:02.382: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53902"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:25:02.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9764" for this suite. 
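Editor's note: the RollingUpdate spec above bumps the DaemonSet's pod image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to agnhost:2.32 and watches the controller replace pods one node at a time (the "Wrong image for pod ..." lines) until every node runs the new image. A hedged sketch of the update-strategy stanza and an equivalent strategic-merge patch, assuming k8s.io/api and k8s.io/apimachinery; the container name "app" and maxUnavailable value are illustrative, not taken from the test.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// RollingUpdate is the default DaemonSet update strategy; maxUnavailable
	// bounds how many nodes' pods may be replaced at once (1 here, which
	// matches the one-node-at-a-time behaviour visible in the log).
	maxUnavailable := intstr.FromInt(1)
	strategy := appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxUnavailable: &maxUnavailable,
		},
	}

	// A strategic-merge patch equivalent to the image bump the spec performs.
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"containers": []map[string]string{{
						"name":  "app",
						"image": "k8s.gcr.io/e2e-test-images/agnhost:2.32",
					}},
				},
			},
		},
	}

	s, _ := json.MarshalIndent(strategy, "", "  ")
	p, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(s))
	fmt.Println(string(p))
}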
• [SLOW TEST:30.011 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":11,"skipped":3163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:25:02.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 22:25:02.432: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:25:02.442: INFO: Waiting for terminating namespaces to be deleted... 
May 13 22:25:02.444: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 22:25:02.458: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 22:25:02.458: INFO: Container discover ready: false, restart count 0 May 13 22:25:02.458: INFO: Container init ready: false, restart count 0 May 13 22:25:02.458: INFO: Container install ready: false, restart count 0 May 13 22:25:02.458: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:25:02.458: INFO: Container nodereport ready: true, restart count 0 May 13 22:25:02.459: INFO: Container reconcile ready: true, restart count 0 May 13 22:25:02.459: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:25:02.459: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:25:02.459: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kube-multus ready: true, restart count 1 May 13 22:25:02.459: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:25:02.459: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:25:02.459: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:25:02.459: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:25:02.459: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:25:02.459: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:25:02.459: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:25:02.459: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:25:02.459: INFO: Container collectd ready: true, restart count 0 May 13 22:25:02.459: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:25:02.459: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:25:02.459: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:25:02.459: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:25:02.459: INFO: Container node-exporter ready: true, restart count 0 May 13 22:25:02.459: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 22:25:02.459: INFO: Container config-reloader ready: true, restart count 0 May 13 22:25:02.459: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:25:02.459: INFO: Container grafana ready: true, restart count 0 May 13 22:25:02.459: INFO: Container prometheus ready: true, restart count 1 May 13 22:25:02.459: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 22:25:02.478: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 22:25:02.478: INFO: Container discover ready: false, restart count 0 May 13 22:25:02.478: INFO: Container init ready: false, restart count 0 May 13 22:25:02.478: INFO: Container install ready: false, restart count 0 May 13 22:25:02.478: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:25:02.478: INFO: Container nodereport ready: true, restart count 0 May 13 22:25:02.478: INFO: Container reconcile ready: true, restart count 0 May 13 22:25:02.478: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:25:02.478: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-multus ready: true, restart count 1 May 13 22:25:02.478: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:25:02.478: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:25:02.478: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:25:02.478: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:25:02.478: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:25:02.478: INFO: Container collectd ready: true, restart count 0 May 13 22:25:02.478: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:25:02.478: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:25:02.478: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:25:02.478: INFO: Container node-exporter ready: true, restart count 0 May 13 22:25:02.478: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 22:25:02.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:25:02.478: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:25:02.478: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 
+0000 UTC (1 container statuses recorded) May 13 22:25:02.478: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16eeca71977708b9], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:25:03.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2464" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":12,"skipped":3523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:25:03.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 13 22:25:03.557: INFO: Waiting up to 1m0s for all nodes to be ready May 13 22:26:03.613: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:26:03.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
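Editor's note: a few entries above, the "NodeSelector is respected if not matching" spec only needs to create a pod whose nodeSelector no node satisfies and then observe the FailedScheduling event quoted in the log (the two workers fail the selector, the three masters fail on the master taint). Below is a hedged sketch of such a pod, assuming k8s.io/api types; the label key/value and image are made up for illustration.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No node carries this label, so the pod stays Pending and the scheduler
	// records a FailedScheduling event like the one captured in the log.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-nonexistent-label": "42",
			},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}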
May 13 22:26:07.675: INFO: found a healthy node: node1 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:26:25.732: INFO: pods created so far: [1 1 1] May 13 22:26:25.732: INFO: length of pods created so far: 3 May 13 22:26:37.747: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:26:44.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6406" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:26:44.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7376" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:101.304 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":13,"skipped":3606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:26:44.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 13 22:26:44.864: INFO: Waiting up to 1m0s for all nodes to be ready May 13 22:27:44.921: INFO: Waiting for terminating namespaces to be deleted... 
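Editor's note: the PreemptionExecutionPath spec summarized above pins several ReplicaSets at different priorities onto one node (node1 in this run) and tracks their replica counts ("pods created so far: [1 1 1]" then "[2 2 1]") as higher-priority replicas displace lower-priority ones. A rough sketch of one such ReplicaSet, assuming PriorityClasses like those in the earlier preemption example; the helper name, ReplicaSet names, replica counts, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rsAtPriority builds a ReplicaSet whose pods land on one node at a given
// priority, so higher-priority replicas can preempt lower-priority ones.
func rsAtPriority(name, priorityClass string, replicas int32) appsv1.ReplicaSet {
	labels := map[string]string{"app": name}
	return appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					PriorityClassName: priorityClass,
					NodeSelector:      map[string]string{"kubernetes.io/hostname": "node1"},
					Containers: []v1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1",
					}},
				},
			},
		},
	}
}

func main() {
	for _, rs := range []appsv1.ReplicaSet{
		rsAtPriority("rs-low", "low-priority", 2),
		rsAtPriority("rs-high", "high-priority", 1),
	} {
		out, _ := json.MarshalIndent(rs, "", "  ")
		fmt.Println(string(out))
	}
}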
[BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:27:44.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 13 22:27:44.957: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. May 13 22:27:44.960: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:27:44.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1995" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 22:27:44.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3445" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.191 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":14,"skipped":3759,"failed":0} 
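Editor's note: the PriorityClass-endpoints spec above exercises the REST verbs on scheduling.k8s.io/v1 PriorityClass objects; the two "Value: Forbidden: may not be changed in an update" lines show that the integer value is immutable after creation, while other fields can still be updated or patched. Below is a hedged client-go sketch of the rejected update and an allowed patch; it is not the conformance test's own code, and the kubeconfig handling is illustrative (the class names p1/p2 come from the log).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig handling; the e2e framework builds its client
	// from the --kubeconfig flag instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pcs := cs.SchedulingV1().PriorityClasses()
	ctx := context.TODO()

	// Changing Value on an existing PriorityClass is rejected with
	// "Value: Forbidden: may not be changed in an update".
	p1, err := pcs.Get(ctx, "p1", metav1.GetOptions{})
	if err == nil {
		p1.Value = p1.Value + 1
		if _, err := pcs.Update(ctx, p1, metav1.UpdateOptions{}); err != nil {
			fmt.Println("update of Value rejected:", err)
		}
	}

	// Patching mutable fields (here the description) is allowed.
	patch := []byte(`{"description":"updated by example"}`)
	if _, err := pcs.Patch(ctx, "p1", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		fmt.Println("patch failed:", err)
	}
}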
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 22:27:45.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 22:27:45.057: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 22:27:45.066: INFO: Waiting for terminating namespaces to be deleted... 
May 13 22:27:45.068: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 22:27:45.078: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 22:27:45.078: INFO: Container discover ready: false, restart count 0 May 13 22:27:45.078: INFO: Container init ready: false, restart count 0 May 13 22:27:45.078: INFO: Container install ready: false, restart count 0 May 13 22:27:45.078: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:27:45.078: INFO: Container nodereport ready: true, restart count 0 May 13 22:27:45.078: INFO: Container reconcile ready: true, restart count 0 May 13 22:27:45.078: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container cmk-webhook ready: true, restart count 0 May 13 22:27:45.078: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:27:45.078: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kube-multus ready: true, restart count 1 May 13 22:27:45.078: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:27:45.078: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 22:27:45.078: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 22:27:45.078: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:27:45.078: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:27:45.078: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:27:45.078: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:27:45.078: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:27:45.078: INFO: Container collectd ready: true, restart count 0 May 13 22:27:45.078: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:27:45.078: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:27:45.078: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:27:45.078: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:27:45.078: INFO: Container node-exporter ready: true, restart count 0 May 13 22:27:45.078: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 22:27:45.078: INFO: Container config-reloader ready: true, restart count 0 May 13 22:27:45.078: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 22:27:45.078: INFO: Container grafana ready: true, restart count 0 May 13 22:27:45.078: INFO: Container prometheus ready: true, restart count 1 May 13 22:27:45.078: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 22:27:45.085: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 22:27:45.085: INFO: Container discover ready: false, restart count 0 May 13 22:27:45.085: INFO: Container init ready: false, restart count 0 May 13 22:27:45.085: INFO: Container install ready: false, restart count 0 May 13 22:27:45.085: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 22:27:45.085: INFO: Container nodereport ready: true, restart count 0 May 13 22:27:45.085: INFO: Container reconcile ready: true, restart count 0 May 13 22:27:45.085: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-flannel ready: true, restart count 2 May 13 22:27:45.085: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-multus ready: true, restart count 1 May 13 22:27:45.085: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-proxy ready: true, restart count 2 May 13 22:27:45.085: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container nginx-proxy ready: true, restart count 2 May 13 22:27:45.085: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container nfd-worker ready: true, restart count 0 May 13 22:27:45.085: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 22:27:45.085: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 22:27:45.085: INFO: Container collectd ready: true, restart count 0 May 13 22:27:45.085: INFO: Container collectd-exporter ready: true, restart count 0 May 13 22:27:45.085: INFO: Container rbac-proxy ready: true, restart count 0 May 13 22:27:45.085: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:27:45.085: INFO: Container node-exporter ready: true, restart count 0 May 13 22:27:45.085: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 22:27:45.085: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 22:27:45.085: INFO: Container prometheus-operator ready: true, restart count 0 May 13 22:27:45.085: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 
[It] validates that NodeSelector is respected if matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-166a8303-8424-4909-bf2c-02d02888b085 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-166a8303-8424-4909-bf2c-02d02888b085 off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-166a8303-8424-4909-bf2c-02d02888b085
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:27:53.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4430" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.131 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":15,"skipped":4687,"failed":0}
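The spec above relaunches its test pod "now with labels", i.e. with a nodeSelector that must match the random label just applied to node1. A minimal sketch of what such a pod object looks like, built with the core/v1 types; the label key and value are taken from the log, while the pod name and image are illustrative assumptions:

```go
// Pod that can only be scheduled on a node carrying the randomly applied
// label; the name and image are illustrative, not values from the test run.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// Must match the label applied to node1 in the log above.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-166a8303-8424-4909-bf2c-02d02888b085": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.4.1", // assumed image, not shown in the log
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```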
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:27:53.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 13 22:27:53.483: INFO: Pod name wrapped-volume-race-3a164490-858c-4b80-8ff2-9f119416bab4: Found 3 pods out of 5
May 13 22:27:58.490: INFO: Pod name wrapped-volume-race-3a164490-858c-4b80-8ff2-9f119416bab4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3a164490-858c-4b80-8ff2-9f119416bab4 in namespace emptydir-wrapper-9690, will wait for the garbage collector to delete the pods
May 13 22:28:12.575: INFO: Deleting ReplicationController wrapped-volume-race-3a164490-858c-4b80-8ff2-9f119416bab4 took: 7.271267ms
May 13 22:28:12.675: INFO: Terminating ReplicationController wrapped-volume-race-3a164490-858c-4b80-8ff2-9f119416bab4 pods took: 100.189046ms
STEP: Creating RC which spawns configmap-volume pods
May 13 22:28:22.392: INFO: Pod name wrapped-volume-race-5509e334-b49a-45f5-aa33-2cfce2fed5d8: Found 0 pods out of 5
May 13 22:28:27.399: INFO: Pod name wrapped-volume-race-5509e334-b49a-45f5-aa33-2cfce2fed5d8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5509e334-b49a-45f5-aa33-2cfce2fed5d8 in namespace emptydir-wrapper-9690, will wait for the garbage collector to delete the pods
May 13 22:28:43.482: INFO: Deleting ReplicationController wrapped-volume-race-5509e334-b49a-45f5-aa33-2cfce2fed5d8 took: 5.138503ms
May 13 22:28:43.583: INFO: Terminating ReplicationController wrapped-volume-race-5509e334-b49a-45f5-aa33-2cfce2fed5d8 pods took: 101.079248ms
STEP: Creating RC which spawns configmap-volume pods
May 13 22:28:52.502: INFO: Pod name wrapped-volume-race-182b2f21-32fc-4e56-bf4b-61824d93e262: Found 0 pods out of 5
May 13 22:28:57.510: INFO: Pod name wrapped-volume-race-182b2f21-32fc-4e56-bf4b-61824d93e262: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-182b2f21-32fc-4e56-bf4b-61824d93e262 in namespace emptydir-wrapper-9690, will wait for the garbage collector to delete the pods
May 13 22:29:11.593: INFO: Deleting ReplicationController wrapped-volume-race-182b2f21-32fc-4e56-bf4b-61824d93e262 took: 6.316798ms
May 13 22:29:11.694: INFO: Terminating ReplicationController wrapped-volume-race-182b2f21-32fc-4e56-bf4b-61824d93e262 pods took: 101.220179ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:29:22.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9690" for this suite.
• [SLOW TEST:89.522 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":16,"skipped":5118,"failed":0}
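This spec repeatedly creates a ReplicationController whose pods mount many configMap-backed volumes at once, the pattern that used to race inside the emptyDir wrapper. A minimal sketch of the shape of one such pod; the volume and configmap names, mount paths, and image are assumptions, and only the count of 50 configmaps comes from the log:

```go
// One pod mounting many configMap volumes at once, mirroring
// "Creating 50 configmaps" / "Creating RC which spawns configmap-volume pods".
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/pause:3.4.1", // assumed image
			}},
		},
	}
	// Attach one configMap-backed volume per configmap and mount them all
	// into the single container.
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical naming
		pod.Spec.Volumes = append(pod.Spec.Volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
			corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```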
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 22:29:22.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 13 22:29:22.748: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 13 22:29:22.754: INFO: Number of nodes with available pods: 0
May 13 22:29:22.754: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
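The "complex daemon" spec creates a DaemonSet whose pod template carries a node selector and later switches its update strategy to RollingUpdate, so daemon pods appear or disappear as node labels are flipped between blue and green. A minimal sketch of a DaemonSet in that spirit; the color label, names, and image are assumptions, since the log does not print the manifest the test actually uses:

```go
// DaemonSet whose pods land only on nodes carrying a selector label;
// relabelling a node to or from "blue" launches or evicts its daemon pod.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-2584"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The spec later changes the strategy to RollingUpdate.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"}, // assumed key/value
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.4.1", // assumed image
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```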
May 13 22:29:22.769: INFO: Number of nodes with available pods: 0
May 13 22:29:22.769: INFO: Node node2 is running more than one daemon pod
May 13 22:29:23.773: INFO: Number of nodes with available pods: 0
May 13 22:29:23.773: INFO: Node node2 is running more than one daemon pod
May 13 22:29:24.773: INFO: Number of nodes with available pods: 0
May 13 22:29:24.773: INFO: Node node2 is running more than one daemon pod
May 13 22:29:25.772: INFO: Number of nodes with available pods: 1
May 13 22:29:25.772: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 13 22:29:25.787: INFO: Number of nodes with available pods: 1
May 13 22:29:25.787: INFO: Number of running nodes: 0, number of available pods: 1
May 13 22:29:26.792: INFO: Number of nodes with available pods: 0
May 13 22:29:26.792: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 13 22:29:26.799: INFO: Number of nodes with available pods: 0
May 13 22:29:26.799: INFO: Node node2 is running more than one daemon pod
May 13 22:29:27.805: INFO: Number of nodes with available pods: 0
May 13 22:29:27.805: INFO: Node node2 is running more than one daemon pod
May 13 22:29:28.804: INFO: Number of nodes with available pods: 0
May 13 22:29:28.804: INFO: Node node2 is running more than one daemon pod
May 13 22:29:29.803: INFO: Number of nodes with available pods: 0
May 13 22:29:29.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:30.803: INFO: Number of nodes with available pods: 0
May 13 22:29:30.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:31.803: INFO: Number of nodes with available pods: 0
May 13 22:29:31.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:32.803: INFO: Number of nodes with available pods: 0
May 13 22:29:32.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:33.803: INFO: Number of nodes with available pods: 0
May 13 22:29:33.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:34.803: INFO: Number of nodes with available pods: 0
May 13 22:29:34.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:35.803: INFO: Number of nodes with available pods: 0
May 13 22:29:35.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:36.803: INFO: Number of nodes with available pods: 0
May 13 22:29:36.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:37.802: INFO: Number of nodes with available pods: 0
May 13 22:29:37.802: INFO: Node node2 is running more than one daemon pod
May 13 22:29:38.805: INFO: Number of nodes with available pods: 0
May 13 22:29:38.805: INFO: Node node2 is running more than one daemon pod
May 13 22:29:39.803: INFO: Number of nodes with available pods: 0
May 13 22:29:39.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:40.803: INFO: Number of nodes with available pods: 0
May 13 22:29:40.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:41.804: INFO: Number of nodes with available pods: 0
May 13 22:29:41.804: INFO: Node node2 is running more than one daemon pod
May 13 22:29:42.804: INFO: Number of nodes with available pods: 0
May 13 22:29:42.804: INFO: Node node2 is running more than one daemon pod
May 13 22:29:43.803: INFO: Number of nodes with available pods: 0
May 13 22:29:43.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:44.803: INFO: Number of nodes with available pods: 0
May 13 22:29:44.803: INFO: Node node2 is running more than one daemon pod
May 13 22:29:45.803: INFO: Number of nodes with available pods: 1
May 13 22:29:45.803: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2584, will wait for the garbage collector to delete the pods
May 13 22:29:45.866: INFO: Deleting DaemonSet.extensions daemon-set took: 4.671431ms
May 13 22:29:45.966: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.630363ms
May 13 22:29:52.369: INFO: Number of nodes with available pods: 0
May 13 22:29:52.369: INFO: Number of running nodes: 0, number of available pods: 0
May 13 22:29:52.372: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"55886"},"items":null}
May 13 22:29:52.375: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"55886"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 22:29:52.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2584" for this suite.
• [SLOW TEST:29.697 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":17,"skipped":5450,"failed":0}
May 13 22:29:52.405: INFO: Running AfterSuite actions on all nodes
May 13 22:29:52.405: INFO: Running AfterSuite actions on node 1
May 13 22:29:52.405: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0}
Ran 17 of 5773 Specs in 890.844 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped
PASS
Ginkgo ran 1 suite in 14m52.280755199s
Test Suite Passed
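The run writes its results to the JUnit file named above. A small sketch of how those counts could be re-checked from that file; the <testsuite> attribute names used here are the common JUnit ones and are an assumption about this report's exact schema:

```go
// Read the JUnit report produced by the run and print its pass/fail counts.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// testSuite models the attributes typically found on a JUnit <testsuite>
// root element; treat the exact schema of this report as an assumption.
type testSuite struct {
	XMLName  xml.Name `xml:"testsuite"`
	Tests    int      `xml:"tests,attr"`
	Failures int      `xml:"failures,attr"`
	Errors   int      `xml:"errors,attr"`
	Time     float64  `xml:"time,attr"`
}

func main() {
	data, err := os.ReadFile("/home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml")
	if err != nil {
		panic(err)
	}
	var ts testSuite
	if err := xml.Unmarshal(data, &ts); err != nil {
		panic(err)
	}
	fmt.Printf("tests=%d failures=%d errors=%d time=%.3fs\n", ts.Tests, ts.Failures, ts.Errors, ts.Time)
	if ts.Failures == 0 && ts.Errors == 0 {
		fmt.Println("Test Suite Passed") // consistent with the summary above
	}
}
```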