I0603 22:09:46.894916 24 e2e.go:129] Starting e2e run "8cd4aaf9-aea9-4975-8871-4a84212b2871" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654294185 - Will randomize all specs
Will run 17 of 5773 specs

Jun 3 22:09:46.957: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 22:09:46.962: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 3 22:09:46.991: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 22:09:47.059: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 22:09:47.059: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 22:09:47.059: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 22:09:47.059: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 22:09:47.059: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 3 22:09:47.078: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 3 22:09:47.078: INFO: e2e test version: v1.21.9
Jun 3 22:09:47.079: INFO: kube-apiserver version: v1.21.1
Jun 3 22:09:47.079: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 22:09:47.086: INFO: Cluster IP family: ipv4
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:09:47.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
W0603 22:09:47.110814 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 22:09:47.111: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 22:09:47.114: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:09:47.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7867" for this suite.
STEP: Destroying namespace "nspatchtest-32a4655e-5883-4a08-af9d-a466b39635d5-916" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":1,"skipped":58,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:09:47.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 3 22:09:47.196: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 22:10:47.249: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Jun 3 22:10:47.275: INFO: Created pod: pod0-sched-preemption-low-priority
Jun 3 22:10:47.295: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:11:17.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5614" for this suite.
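For reference, the preemption flow exercised above (low- and medium-priority pods squeezed out by a higher-priority one) is driven by PriorityClass objects. A minimal client-go sketch of that setup in Go; all names and values here are hypothetical, not the suite's actual fixtures:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A low-value class: pods using it are preempted first.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-low-priority"}, // hypothetical name
		Value:      1,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod opts in by class name; the scheduler resolves it to the Value.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod0-low-priority", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "example-low-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```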
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:90.217 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":2,"skipped":335,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:11:17.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 3 22:11:17.429: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:17.429: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:17.429: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:17.431: INFO: Number of nodes with available pods: 0
Jun 3 22:11:17.431: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:18.437: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:18.437: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:18.437: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:18.440: INFO: Number of nodes with available pods: 0
Jun 3 22:11:18.440: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:19.439: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:19.439: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:19.439: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:19.441: INFO: Number of nodes with available pods: 0
Jun 3 22:11:19.441: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:20.437: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:20.437: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:20.437: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:20.440: INFO: Number of nodes with available pods: 1
Jun 3 22:11:20.440: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:21.437: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.437: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.437: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.440: INFO: Number of nodes with available pods: 2
Jun 3 22:11:21.440: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jun 3 22:11:21.454: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.454: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.454: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:21.457: INFO: Number of nodes with available pods: 1
Jun 3 22:11:21.457: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:22.465: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:22.465: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:22.465: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:22.468: INFO: Number of nodes with available pods: 1
Jun 3 22:11:22.468: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:23.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:23.464: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:23.464: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:23.468: INFO: Number of nodes with available pods: 1
Jun 3 22:11:23.468: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:24.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:24.465: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:24.465: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:24.467: INFO: Number of nodes with available pods: 1
Jun 3 22:11:24.467: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:25.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:25.464: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:25.464: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:25.467: INFO: Number of nodes with available pods: 1
Jun 3 22:11:25.467: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:26.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:26.464: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:26.464: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:26.467: INFO: Number of nodes with available pods: 1
Jun 3 22:11:26.467: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:27.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:27.464: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:27.464: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:27.467: INFO: Number of nodes with available pods: 1
Jun 3 22:11:27.467: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:28.462: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:28.462: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:28.462: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:28.465: INFO: Number of nodes with available pods: 1
Jun 3 22:11:28.465: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:29.463: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:29.463: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:29.463: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:29.466: INFO: Number of nodes with available pods: 1
Jun 3 22:11:29.466: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:30.465: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:30.465: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:30.465: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:30.468: INFO: Number of nodes with available pods: 1
Jun 3 22:11:30.468: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:31.462: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:31.462: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:31.462: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:31.465: INFO: Number of nodes with available pods: 1
Jun 3 22:11:31.465: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:32.464: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:32.464: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:32.464: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:32.467: INFO: Number of nodes with available pods: 1
Jun 3 22:11:32.467: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:33.463: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:33.463: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:33.463: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:33.466: INFO: Number of nodes with available pods: 1
Jun 3 22:11:33.466: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:34.465: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:34.465: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:34.465: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:34.468: INFO: Number of nodes with available pods: 2
Jun 3 22:11:34.468: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9020, will wait for the garbage collector to delete the pods
Jun 3 22:11:34.527: INFO: Deleting DaemonSet.extensions daemon-set took: 3.572957ms
Jun 3 22:11:34.627: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.126927ms
Jun 3 22:11:42.130: INFO: Number of nodes with available pods: 0
Jun 3 22:11:42.130: INFO: Number of running nodes: 0, number of available pods: 0
Jun 3 22:11:42.136: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51396"},"items":null}
Jun 3 22:11:42.139: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51396"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:11:42.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9020" for this suite.
• [SLOW TEST:24.772 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":3,"skipped":355,"failed":0}
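The long poll above keeps skipping master1 through master3 because the test's DaemonSet carries no toleration for the node-role.kubernetes.io/master NoSchedule taint, so only node1 and node2 count toward availability. A sketch in Go, using the k8s.io/api types, of the toleration a DaemonSet would need to cover the masters as well (the labels and image are illustrative, not the suite's actual spec):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.4.1"}},
					// Without this toleration the scheduler skips every node
					// carrying the master NoSchedule taint, which is exactly
					// the "can't tolerate" skip logged for master1-3 above.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
				},
			},
		},
	}
	fmt.Printf("daemonset %q tolerates: %+v\n", ds.Name, ds.Spec.Template.Spec.Tolerations)
}
```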
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:11:42.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 3 22:11:42.186: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 22:11:42.197: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 22:11:42.200: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 3 22:11:42.210: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container nodereport ready: true, restart count 0
Jun 3 22:11:42.210: INFO: Container reconcile ready: true, restart count 0
Jun 3 22:11:42.210: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container discover ready: false, restart count 0
Jun 3 22:11:42.210: INFO: Container init ready: false, restart count 0
Jun 3 22:11:42.210: INFO: Container install ready: false, restart count 0
Jun 3 22:11:42.210: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 22:11:42.210: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 22:11:42.210: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:11:42.210: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 22:11:42.210: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 22:11:42.210: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 22:11:42.210: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 22:11:42.210: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container collectd ready: true, restart count 0
Jun 3 22:11:42.210: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 22:11:42.210: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 22:11:42.210: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 22:11:42.210: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:11:42.210: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:11:42.211: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 3 22:11:42.211: INFO: Container config-reloader ready: true, restart count 0
Jun 3 22:11:42.211: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 22:11:42.211: INFO: Container grafana ready: true, restart count 0
Jun 3 22:11:42.211: INFO: Container prometheus ready: true, restart count 1
Jun 3 22:11:42.211: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 3 22:11:42.219: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container discover ready: false, restart count 0
Jun 3 22:11:42.219: INFO: Container init ready: false, restart count 0
Jun 3 22:11:42.219: INFO: Container install ready: false, restart count 0
Jun 3 22:11:42.219: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container nodereport ready: true, restart count 0
Jun 3 22:11:42.219: INFO: Container reconcile ready: true, restart count 0
Jun 3 22:11:42.219: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 22:11:42.219: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:11:42.219: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 22:11:42.219: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 22:11:42.219: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 22:11:42.219: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 22:11:42.219: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 22:11:42.219: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 22:11:42.219: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 22:11:42.219: INFO: Container collectd ready: true, restart count 0
Jun 3 22:11:42.219: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 22:11:42.219: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 22:11:42.219: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 22:11:42.220: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:11:42.220: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:11:42.220: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded)
Jun 3 22:11:42.220: INFO: Container tas-extender ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: verifying the node has the label node node1
STEP: verifying the node has the label node node2
Jun 3 22:11:42.272: INFO: Pod cmk-84nbw requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod cmk-v446x requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod cmk-webhook-6c9d5f8578-c927x requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod kube-flannel-hm6bh requesting resource cpu=150m on Node node1
Jun 3 22:11:42.272: INFO: Pod kube-flannel-pc7wj requesting resource cpu=150m on Node node2
Jun 3 22:11:42.272: INFO: Pod kube-multus-ds-amd64-n7spl requesting resource cpu=100m on Node node2
Jun 3 22:11:42.272: INFO: Pod kube-multus-ds-amd64-p7r6j requesting resource cpu=100m on Node node1
Jun 3 22:11:42.272: INFO: Pod kube-proxy-b6zlv requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod kube-proxy-qmkcq requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod kubernetes-dashboard-785dcbb76d-25c95 requesting resource cpu=50m on Node node2
Jun 3 22:11:42.272: INFO: Pod kubernetes-metrics-scraper-5558854cb-fz4kn requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1
Jun 3 22:11:42.272: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2
Jun 3 22:11:42.272: INFO: Pod node-feature-discovery-worker-gn855 requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod node-feature-discovery-worker-rg6tx requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod collectd-nbx5z requesting resource cpu=0m on Node node1
Jun 3 22:11:42.272: INFO: Pod collectd-q2l4t requesting resource cpu=0m on Node node2
Jun 3 22:11:42.272: INFO: Pod node-exporter-f5xkq requesting resource cpu=112m on Node node1
Jun 3 22:11:42.272: INFO: Pod node-exporter-g45bm requesting resource cpu=112m on Node node2
Jun 3 22:11:42.272: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1
Jun 3 22:11:42.272: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 requesting resource cpu=0m on Node node2
STEP: Starting Pods to consume most of the cluster CPU.
Jun 3 22:11:42.272: INFO: Creating a pod which consumes cpu=53489m on Node node1
Jun 3 22:11:42.282: INFO: Creating a pod which consumes cpu=53594m on Node node2
STEP: Creating another pod that requires unavailable amount of CPU.
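The filler pods sized above (cpu=53489m and cpu=53594m) are ordinary pods whose container requests pin down almost all remaining allocatable CPU, so the next pod cannot fit on any worker. A sketch of how such a request is expressed with the resource API in Go (the figure is taken from the log; the pod name is hypothetical):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Request (and limit) nearly all CPU the scheduler still considers free
	// on node1; 53489m is the figure computed in the log above.
	cpu := resource.MustParse("53489m")
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Requests.Cpu())
}
```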
STEP: Considering event: Type = [Normal], Name = [filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6.16f53be732850d11], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1189/filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6 to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6.16f53be78651ec9d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6.16f53be79c571f82], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 369.429616ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6.16f53be7a28f8771], Reason = [Created], Message = [Created container filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6.16f53be7a8e3cf56], Reason = [Started], Message = [Started container filler-pod-7ff35cfc-7603-447d-8ddf-d54261fe01e6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376.16f53be7320113d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1189/filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376 to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376.16f53be7858827dc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376.16f53be797a855c5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 304.091209ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376.16f53be79e4ebf74], Reason = [Created], Message = [Created container filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376]
STEP: Considering event: Type = [Normal], Name = [filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376.16f53be7a4e8f4a7], Reason = [Started], Message = [Started container filler-pod-dc8cdfbe-00b0-41d2-b7e3-9223677ee376]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16f53be8222f2d06], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: removing the label node off the node node1
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node node2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:11:47.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1189" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:5.200 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":4,"skipped":545,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:11:47.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 3 22:11:47.406: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:47.406: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:47.406: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:47.408: INFO: Number of nodes with available pods: 0
Jun 3 22:11:47.408: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:48.414: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:48.414: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:48.414: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:48.417: INFO: Number of nodes with available pods: 0
Jun 3 22:11:48.417: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:49.414: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:49.414: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:49.414: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:49.417: INFO: Number of nodes with available pods: 0
Jun 3 22:11:49.417: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:50.416: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.416: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.416: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.419: INFO: Number of nodes with available pods: 2
Jun 3 22:11:50.419: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
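Setting a daemon pod's phase to 'Failed', as the STEP above does, comes down to a write through the pod's status subresource; the DaemonSet controller then deletes the failed pod and creates a replacement. A rough sketch of that flip in Go, assuming cs is a client-go clientset and ns/podName point at one of the daemon pods (all identifiers here are hypothetical, not the suite's own helpers):

```go
package daemonutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// failDaemonPod marks one daemon pod Failed so the DaemonSet controller
// has to replace it, mirroring the "revived" check in the log above.
func failDaemonPod(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Status.Phase = corev1.PodFailed
	// Phase lives in the status subresource, so UpdateStatus, not Update.
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
	return err
}
```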
Jun 3 22:11:50.439: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.439: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.439: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:50.441: INFO: Number of nodes with available pods: 1
Jun 3 22:11:50.441: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:51.449: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:51.449: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:51.449: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:51.452: INFO: Number of nodes with available pods: 1
Jun 3 22:11:51.452: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:52.447: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:52.447: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:52.447: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:52.450: INFO: Number of nodes with available pods: 1
Jun 3 22:11:52.450: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:53.446: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:53.447: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:53.447: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:53.450: INFO: Number of nodes with available pods: 1
Jun 3 22:11:53.450: INFO: Node node1 is running more than one daemon pod
Jun 3 22:11:54.449: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:54.449: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:54.449: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jun 3 22:11:54.452: INFO: Number of nodes with available pods: 2
Jun 3 22:11:54.452: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1469, will wait for the garbage collector to delete the pods
Jun 3 22:11:54.515: INFO: Deleting DaemonSet.extensions daemon-set took: 5.708305ms
Jun 3 22:11:54.617: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.249494ms
Jun 3 22:12:00.220: INFO: Number of nodes with available pods: 0
Jun 3 22:12:00.220: INFO: Number of running nodes: 0, number of available pods: 0
Jun 3 22:12:00.223: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"51597"},"items":null}
Jun 3 22:12:00.226: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"51597"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:12:00.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1469" for this suite.
• [SLOW TEST:12.881 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":5,"skipped":887,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:12:00.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 3 22:12:00.278: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 22:13:00.335: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:13:00.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 22:13:00.370: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Jun 3 22:13:00.372: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:13:00.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4788" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:13:00.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3195" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.204 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":6,"skipped":1079,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:13:00.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:13:06.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5957" for this suite.
STEP: Destroying namespace "nsdeletetest-3408" for this suite.
Jun 3 22:13:06.545: INFO: Namespace nsdeletetest-3408 was already deleted
STEP: Destroying namespace "nsdeletetest-6920" for this suite.
• [SLOW TEST:6.089 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":7,"skipped":1660,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:13:06.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 3 22:13:06.594: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 22:14:06.647: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
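For the critical-pod variant starting above, the preemptor is not a custom PriorityClass but one of the built-in critical classes. A hedged sketch in Go of what such a pod can look like; the name and resource figure are illustrative, and the suite's actual critical pod may be shaped differently:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		// System priority classes are by default only admitted in kube-system.
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: metav1.NamespaceSystem},
		Spec: corev1.PodSpec{
			// Built-in class with value 2000000000; on a full node, anything
			// with a lower priority becomes a preemption candidate.
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.PriorityClassName)
}
```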
Jun 3 22:14:06.672: INFO: Created pod: pod0-sched-preemption-low-priority Jun 3 22:14:06.692: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:14:36.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4411" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:90.235 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":8,"skipped":1757,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:14:36.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 3 22:14:37.097: INFO: Pod name wrapped-volume-race-d732b331-4b1d-4c51-a294-323adb9ff324: Found 3 pods out of 5 Jun 3 22:14:42.103: INFO: Pod name wrapped-volume-race-d732b331-4b1d-4c51-a294-323adb9ff324: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d732b331-4b1d-4c51-a294-323adb9ff324 in namespace emptydir-wrapper-7767, will wait for the garbage collector to delete the pods Jun 3 22:14:56.182: INFO: Deleting ReplicationController wrapped-volume-race-d732b331-4b1d-4c51-a294-323adb9ff324 took: 5.276977ms Jun 3 22:14:56.283: INFO: Terminating ReplicationController wrapped-volume-race-d732b331-4b1d-4c51-a294-323adb9ff324 pods took: 101.154137ms STEP: Creating RC which spawns configmap-volume pods Jun 3 22:15:10.298: INFO: Pod name wrapped-volume-race-89faeab2-d309-479c-b3dc-a277f999d89f: Found 0 pods out of 5 Jun 3 22:15:15.309: INFO: Pod name wrapped-volume-race-89faeab2-d309-479c-b3dc-a277f999d89f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-89faeab2-d309-479c-b3dc-a277f999d89f in namespace emptydir-wrapper-7767, will wait for the garbage collector to delete the pods Jun 3 
22:15:29.391: INFO: Deleting ReplicationController wrapped-volume-race-89faeab2-d309-479c-b3dc-a277f999d89f took: 6.441864ms Jun 3 22:15:29.491: INFO: Terminating ReplicationController wrapped-volume-race-89faeab2-d309-479c-b3dc-a277f999d89f pods took: 100.39555ms STEP: Creating RC which spawns configmap-volume pods Jun 3 22:15:40.309: INFO: Pod name wrapped-volume-race-8f1658df-84e2-4779-a1f1-5b4397da922f: Found 0 pods out of 5 Jun 3 22:15:45.319: INFO: Pod name wrapped-volume-race-8f1658df-84e2-4779-a1f1-5b4397da922f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8f1658df-84e2-4779-a1f1-5b4397da922f in namespace emptydir-wrapper-7767, will wait for the garbage collector to delete the pods Jun 3 22:16:01.400: INFO: Deleting ReplicationController wrapped-volume-race-8f1658df-84e2-4779-a1f1-5b4397da922f took: 7.060225ms Jun 3 22:16:01.500: INFO: Terminating ReplicationController wrapped-volume-race-8f1658df-84e2-4779-a1f1-5b4397da922f pods took: 100.161252ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:16:12.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7767" for this suite. • [SLOW TEST:95.606 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":9,"skipped":1787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:16:12.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace 
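The wrapped-volume race exercised above comes from mounting many ConfigMap-backed volumes on a single pod template. A sketch of the setup phase as a fragment to be wired into the client setup from the earlier PriorityClass sketch; the count of 50 mirrors the STEP above, while the ConfigMap names and data are illustrative:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createWrappedVolumes creates 50 ConfigMaps and returns the volume
// list a pod template would mount; mounting many such volumes at once
// is what exercises the emptyDir wrapper race tested above.
func createWrappedVolumes(ctx context.Context, cs kubernetes.Interface, ns string) ([]corev1.Volume, error) {
	var volumes []corev1.Volume
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("race-cm-%d", i)
		cm := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data": "1"},
		}
		if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
			return nil, err
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
	}
	return volumes, nil
}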
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:16:27.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5345" for this suite.
STEP: Destroying namespace "nsdeletetest-2218" for this suite.
Jun 3 22:16:27.526: INFO: Namespace nsdeletetest-2218 was already deleted
STEP: Destroying namespace "nsdeletetest-1410" for this suite.

• [SLOW TEST:15.120 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":10,"skipped":2697,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:16:27.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 22:16:27.574: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
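The delete/wait/recreate/verify sequence in the namespace test above maps to a handful of API calls. A rough fragment, reusing the client setup from the first sketch; the namespace name is illustrative, and real code should bound the wait loop with a timeout:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recycleNamespace deletes a namespace, waits for it to disappear,
// recreates it, and confirms nothing survived the round trip, which is
// the same sequence as the STEP lines above.
func recycleNamespace(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Namespace deletion is asynchronous; poll until it is gone.
	for {
		if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(time.Second)
	}
	if _, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		return err
	}
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("pods after recreate: %d\n", len(pods.Items)) // expect 0
	return nil
}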
Jun 3 22:16:27.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:27.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:27.583: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:27.586: INFO: Number of nodes with available pods: 0 Jun 3 22:16:27.586: INFO: Node node1 is running more than one daemon pod Jun 3 22:16:28.591: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:28.591: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:28.591: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:28.594: INFO: Number of nodes with available pods: 0 Jun 3 22:16:28.594: INFO: Node node1 is running more than one daemon pod Jun 3 22:16:29.592: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:29.592: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:29.592: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:29.595: INFO: Number of nodes with available pods: 0 Jun 3 22:16:29.595: INFO: Node node1 is running more than one daemon pod Jun 3 22:16:30.596: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:30.596: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:30.596: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:30.599: INFO: Number of nodes with available pods: 0 Jun 3 22:16:30.599: INFO: Node node1 is running more than one daemon pod Jun 3 22:16:31.593: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:31.593: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:31.593: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:31.596: INFO: Number of nodes with available pods: 2 Jun 3 22:16:31.596: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
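The image update that drives the rollout checked below can be expressed as a strategic merge patch against the DaemonSet pod template; the log shows k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 being replaced by agnhost:2.32. A sketch, where the container name "app" is an assumption about the template, not taken from this run:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// bumpDaemonSetImage patches the pod template image, which makes the
// RollingUpdate controller replace daemon pods node by node.
func bumpDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns string) error {
	patch := []byte(`{"spec":{"template":{"spec":{"containers":` +
		`[{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}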
Jun 3 22:16:31.622: INFO: Wrong image for pod: daemon-set-5v59q. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:31.622: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:31.627: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:31.627: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:31.627: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:32.630: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:32.634: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:32.634: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:32.634: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:33.631: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:33.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:33.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:33.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:34.633: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:34.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:34.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:34.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:35.632: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Jun 3 22:16:35.635: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:35.635: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:35.635: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:36.633: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:36.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:36.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:36.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:37.632: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:37.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:37.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:37.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:38.634: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:38.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:38.639: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:38.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:39.633: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:39.639: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:39.639: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:39.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:40.635: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Jun 3 22:16:40.640: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:40.640: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:40.640: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:41.633: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:41.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:41.639: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:41.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:42.631: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:42.631: INFO: Pod daemon-set-9f8l5 is not available Jun 3 22:16:42.635: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:42.635: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:42.635: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:43.635: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:43.635: INFO: Pod daemon-set-9f8l5 is not available Jun 3 22:16:43.639: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:43.639: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:43.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:44.633: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. 
Jun 3 22:16:44.633: INFO: Pod daemon-set-9f8l5 is not available Jun 3 22:16:44.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:44.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:44.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:45.632: INFO: Wrong image for pod: daemon-set-6f22c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 22:16:45.632: INFO: Pod daemon-set-9f8l5 is not available Jun 3 22:16:45.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:45.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:45.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:46.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:46.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:46.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:47.635: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:47.635: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:47.635: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:48.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:48.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:48.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:49.637: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:49.637: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:49.637: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 
3 22:16:50.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:50.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:50.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:51.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:51.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:51.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:52.637: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:52.637: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:52.637: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:53.637: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:53.637: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:53.637: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:54.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:54.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:54.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:55.639: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:55.639: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:55.639: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:56.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:56.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:56.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:57.636: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:57.636: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:57.636: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:58.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:58.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:58.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:59.640: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:59.640: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:16:59.640: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.633: INFO: Pod daemon-set-khzhv is not available Jun 3 22:17:00.638: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.638: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.638: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
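A status-based way to express the per-node completion check that follows is to poll the DaemonSet until the updated and available counts converge on the desired count; wait.PollImmediate is the polling helper available in the apimachinery libraries of this era. A sketch under those assumptions:

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitRolledOut polls until every desired node runs an updated,
// available daemon pod, the condition the framework checks above.
func waitRolledOut(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}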
Jun 3 22:17:00.643: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.643: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.643: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:00.646: INFO: Number of nodes with available pods: 1 Jun 3 22:17:00.646: INFO: Node node2 is running more than one daemon pod Jun 3 22:17:01.653: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:01.653: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:01.653: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:01.656: INFO: Number of nodes with available pods: 1 Jun 3 22:17:01.656: INFO: Node node2 is running more than one daemon pod Jun 3 22:17:02.653: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:02.653: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:02.653: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:02.656: INFO: Number of nodes with available pods: 1 Jun 3 22:17:02.656: INFO: Node node2 is running more than one daemon pod Jun 3 22:17:03.654: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:03.654: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:03.654: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:17:03.656: INFO: Number of nodes with available pods: 2 Jun 3 22:17:03.656: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6883, will wait for the garbage collector to delete the pods Jun 3 22:17:03.727: INFO: Deleting DaemonSet.extensions daemon-set took: 4.383105ms Jun 3 22:17:03.828: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.952597ms Jun 3 22:17:12.131: INFO: Number of nodes with available pods: 0 Jun 3 22:17:12.131: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 22:17:12.133: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"53568"},"items":null} Jun 3 22:17:12.136: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"53568"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:17:12.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6883" for this suite. • [SLOW TEST:44.622 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":11,"skipped":3009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:17:12.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 22:17:12.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 22:17:12.195: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 22:17:12.198: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 22:17:12.204: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 22:17:12.204: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:17:12.204: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:17:12.204: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 22:17:12.204: INFO: Container discover ready: false, restart count 0 Jun 3 22:17:12.204: INFO: Container init ready: false, restart count 0 Jun 3 22:17:12.205: INFO: Container install ready: false, restart count 0 Jun 3 22:17:12.205: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 22:17:12.205: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 22:17:12.205: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:17:12.205: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:17:12.205: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:17:12.205: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:17:12.205: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.205: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:17:12.205: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 22:17:12.205: INFO: Container collectd ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:17:12.205: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 22:17:12.205: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:17:12.205: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 22:17:12.205: INFO: Container config-reloader ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container grafana ready: true, restart count 0 Jun 3 22:17:12.205: INFO: Container prometheus ready: true, restart count 1 Jun 3 22:17:12.205: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 22:17:12.220: INFO: cmk-init-discover-node2-xvf8p from kube-system 
started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 22:17:12.220: INFO: Container discover ready: false, restart count 0 Jun 3 22:17:12.220: INFO: Container init ready: false, restart count 0 Jun 3 22:17:12.220: INFO: Container install ready: false, restart count 0 Jun 3 22:17:12.220: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 22:17:12.221: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:17:12.221: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:17:12.221: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:17:12.221: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:17:12.221: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:17:12.221: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:17:12.221: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:17:12.221: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:17:12.221: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:17:12.221: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:17:12.221: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 22:17:12.221: INFO: Container collectd ready: true, restart count 0 Jun 3 22:17:12.221: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:17:12.221: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:17:12.221: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 22:17:12.221: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:17:12.221: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:17:12.221: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 22:17:12.221: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. 
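The restricted-pod that produces the FailedScheduling event recorded just below looks roughly like this; the selector key/value and the image are illustrative stand-ins for the suite's generated ones:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createRestrictedPod creates a pod whose nodeSelector matches no node
// label, so the scheduler can only report FailedScheduling for it.
func createRestrictedPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-nonexistent": "42"},
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}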
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f53c3405de6fae], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:17:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3999" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":12,"skipped":3109,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:17:13.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 22:17:13.315: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 22:17:13.323: INFO: Waiting for terminating namespaces to be deleted... 
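The scheduler's verdict recorded above can also be read back from the event stream; reason and involvedObject.name are both supported field selectors for core/v1 events. A fragment under the same client assumptions as before:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpFailedScheduling lists FailedScheduling events for the pod above.
func dumpFailedScheduling(ctx context.Context, cs kubernetes.Interface, ns string) error {
	events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "reason=FailedScheduling,involvedObject.name=restricted-pod",
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Println(e.Message) // e.g. "0/5 nodes are available: ..."
	}
	return nil
}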
Jun 3 22:17:13.326: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 22:17:13.335: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 22:17:13.335: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:17:13.335: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:17:13.335: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 22:17:13.335: INFO: Container discover ready: false, restart count 0 Jun 3 22:17:13.335: INFO: Container init ready: false, restart count 0 Jun 3 22:17:13.335: INFO: Container install ready: false, restart count 0 Jun 3 22:17:13.335: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.335: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 22:17:13.335: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.335: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 22:17:13.335: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.335: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:17:13.335: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.336: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 22:17:13.336: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.336: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:17:13.336: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.336: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:17:13.336: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.336: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:17:13.336: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 22:17:13.336: INFO: Container collectd ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:17:13.336: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 22:17:13.336: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:17:13.336: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 22:17:13.336: INFO: Container config-reloader ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container grafana ready: true, restart count 0 Jun 3 22:17:13.336: INFO: Container prometheus ready: true, restart count 1 Jun 3 22:17:13.336: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 22:17:13.349: INFO: cmk-init-discover-node2-xvf8p from kube-system 
started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 22:17:13.349: INFO: Container discover ready: false, restart count 0 Jun 3 22:17:13.349: INFO: Container init ready: false, restart count 0 Jun 3 22:17:13.349: INFO: Container install ready: false, restart count 0 Jun 3 22:17:13.349: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 22:17:13.349: INFO: Container nodereport ready: true, restart count 0 Jun 3 22:17:13.349: INFO: Container reconcile ready: true, restart count 0 Jun 3 22:17:13.349: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 22:17:13.349: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kube-multus ready: true, restart count 1 Jun 3 22:17:13.349: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 22:17:13.349: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 22:17:13.349: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 22:17:13.349: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 22:17:13.349: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 22:17:13.349: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 22:17:13.349: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 22:17:13.349: INFO: Container collectd ready: true, restart count 0 Jun 3 22:17:13.349: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 22:17:13.349: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 22:17:13.349: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 22:17:13.349: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 22:17:13.349: INFO: Container node-exporter ready: true, restart count 0 Jun 3 22:17:13.349: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 22:17:13.349: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. 
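The label-and-relaunch steps that follow amount to patching the found node and reusing the same key/value in the relaunched pod's nodeSelector. A sketch; the label key here is illustrative (the suite generates a UUID-based key), while the value 42 and node node2 match the log:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelNode applies a label to the chosen node so that a pod with a
// matching nodeSelector is guaranteed to land there.
func labelNode(ctx context.Context, cs kubernetes.Interface, node string) error {
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-demo":"42"}}}`)
	_, err := cs.CoreV1().Nodes().Patch(ctx, node,
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}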
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6840b01e-8b68-4937-8c48-e205df9fa2b3 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-6840b01e-8b68-4937-8c48-e205df9fa2b3 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6840b01e-8b68-4937-8c48-e205df9fa2b3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:17:21.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2873" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.140 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":13,"skipped":4073,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:17:21.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 3 22:17:21.466: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 22:18:21.517: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:18:21.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
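PreemptionExecutionPath runs several ReplicaSets pinned to the node the test finds (node2, per the log below) at different priorities, so that higher-priority replicas preempt lower-priority ones in place. A skeleton of one such ReplicaSet; the names, replica count, priority class, and image are illustrative:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preemptionRS builds a ReplicaSet whose pods are pinned to one node
// and scheduled at the given priority class.
func preemptionRS(name, priorityClass, node string, replicas int32) *appsv1.ReplicaSet {
	labels := map[string]string{"app": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					PriorityClassName: priorityClass,
					NodeSelector:      map[string]string{"kubernetes.io/hostname": node},
					Containers: []corev1.Container{
						{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"},
					},
				},
			},
		},
	}
}

The "pods created so far: [1 1 1]" and later "[2 2 1]" lines below are the suite counting replicas per ReplicaSet as preemption reshuffles them.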
Jun 3 22:18:25.583: INFO: found a healthy node: node2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:18:41.641: INFO: pods created so far: [1 1 1] Jun 3 22:18:41.641: INFO: length of pods created so far: 3 Jun 3 22:18:55.655: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:19:02.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6930" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:19:02.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5908" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:101.303 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":4180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:19:02.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a 
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:19:02.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 22:19:02.792: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jun 3 22:19:02.800: INFO: Number of nodes with available pods: 0
Jun 3 22:19:02.800: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jun 3 22:19:02.823: INFO: Number of nodes with available pods: 0
Jun 3 22:19:02.823: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:03.827: INFO: Number of nodes with available pods: 0
Jun 3 22:19:03.827: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:04.829: INFO: Number of nodes with available pods: 0
Jun 3 22:19:04.829: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:05.826: INFO: Number of nodes with available pods: 0
Jun 3 22:19:05.826: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:06.830: INFO: Number of nodes with available pods: 1
Jun 3 22:19:06.830: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jun 3 22:19:06.846: INFO: Number of nodes with available pods: 1
Jun 3 22:19:06.846: INFO: Number of running nodes: 0, number of available pods: 1
Jun 3 22:19:07.849: INFO: Number of nodes with available pods: 0
Jun 3 22:19:07.849: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jun 3 22:19:07.855: INFO: Number of nodes with available pods: 0
Jun 3 22:19:07.855: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:08.861: INFO: Number of nodes with available pods: 0
Jun 3 22:19:08.861: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:09.858: INFO: Number of nodes with available pods: 0
Jun 3 22:19:09.858: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:10.859: INFO: Number of nodes with available pods: 0
Jun 3 22:19:10.859: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:11.860: INFO: Number of nodes with available pods: 0
Jun 3 22:19:11.860: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:12.858: INFO: Number of nodes with available pods: 1
Jun 3 22:19:12.858: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2942, will wait for the garbage collector to delete the pods
Jun 3 22:19:12.919: INFO: Deleting DaemonSet.extensions daemon-set took: 4.38685ms
Jun 3 22:19:13.020: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.636275ms
Jun 3 22:19:16.725: INFO: Number of nodes with available pods: 0
Jun 3 22:19:16.725: INFO: Number of running nodes: 0, number of available pods: 0
Jun 3 22:19:16.727: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54235"},"items":null}
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54235"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 22:19:16.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2942" for this suite. • [SLOW TEST:14.004 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":15,"skipped":5086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 22:19:16.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 22:19:16.802: INFO: Create a RollingUpdate DaemonSet Jun 3 22:19:16.805: INFO: Check that daemon pods launch on every node of the cluster Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:16.813: INFO: Number of nodes with available pods: 0 Jun 3 22:19:16.813: INFO: Node node1 is running more than one daemon pod Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 22:19:17.822: INFO: Number of nodes with available pods: 0 Jun 3 22:19:17.822: INFO: Node node1 is running more than one daemon pod Jun 3 22:19:18.820: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], 
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:19:16.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jun 3 22:19:16.802: INFO: Create a RollingUpdate DaemonSet
Jun 3 22:19:16.805: INFO: Check that daemon pods launch on every node of the cluster
Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:16.810: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:16.813: INFO: Number of nodes with available pods: 0
Jun 3 22:19:16.813: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:17.818: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:17.822: INFO: Number of nodes with available pods: 0
Jun 3 22:19:17.822: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:18.820: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:18.820: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:18.820: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:18.823: INFO: Number of nodes with available pods: 0
Jun 3 22:19:18.823: INFO: Node node1 is running more than one daemon pod
Jun 3 22:19:19.822: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:19.822: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:19.822: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:19.824: INFO: Number of nodes with available pods: 1
Jun 3 22:19:19.824: INFO: Node node2 is running more than one daemon pod
Jun 3 22:19:20.819: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:20.819: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:20.819: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:20.822: INFO: Number of nodes with available pods: 2
Jun 3 22:19:20.822: INFO: Number of running nodes: 2, number of available pods: 2
Jun 3 22:19:20.822: INFO: Update the DaemonSet to trigger a rollout
Jun 3 22:19:20.829: INFO: Updating DaemonSet daemon-set
Jun 3 22:19:30.843: INFO: Roll back the DaemonSet before rollout is complete
Jun 3 22:19:30.850: INFO: Updating DaemonSet daemon-set
Jun 3 22:19:30.850: INFO: Make sure DaemonSet rollback is complete
Jun 3 22:19:30.853: INFO: Wrong image for pod: daemon-set-9js9d. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
Jun 3 22:19:30.853: INFO: Pod daemon-set-9js9d is not available
Jun 3 22:19:30.858: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:30.858: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:30.858: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:31.866: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:31.866: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:31.866: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:32.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:32.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:32.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:33.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:33.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:33.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:34.868: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:34.868: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:34.868: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:35.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:35.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:35.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:36.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:36.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:36.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:37.866: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:37.866: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:37.866: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:38.866: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:38.866: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:38.867: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:39.869: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:39.869: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:39.869: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:40.863: INFO: Pod daemon-set-2mksf is not available
Jun 3 22:19:40.867: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:40.867: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 3 22:19:40.868: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2, will wait for the garbage collector to delete the pods
Jun 3 22:19:40.930: INFO: Deleting DaemonSet.extensions daemon-set took: 4.299217ms
Jun 3 22:19:41.031: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.821997ms
Jun 3 22:19:52.134: INFO: Number of nodes with available pods: 0
Jun 3 22:19:52.134: INFO: Number of running nodes: 0, number of available pods: 0
Jun 3 22:19:52.136: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"54441"},"items":null}
Jun 3 22:19:52.138: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"54441"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:19:52.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2" for this suite.
• [SLOW TEST:35.401 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":16,"skipped":5131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
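The rollback spec above updates the DaemonSet template to the unpullable image foo:non-existent mid-rollout, then reverts it and checks that pods still running the old httpd image are not restarted. A sketch of that update-then-revert step using strategic-merge patches — the "default" namespace and the container name "app" are assumptions; kubectl rollout undo daemonset/daemon-set is the CLI equivalent of the revert:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // Build a strategic-merge patch that swaps the template image.
    dsPatch := func(image string) []byte {
        return []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"` + image + `"}]}}}}`)
    }

    // Trigger a RollingUpdate rollout with an image that cannot be pulled;
    // the replacement pods never become available.
    if _, err := cs.AppsV1().DaemonSets("default").Patch(ctx, "daemon-set",
        types.StrategicMergePatchType, dsPatch("foo:non-existent"), metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Roll back before the rollout completes by restoring the old image.
    // Nodes still running the old image keep their pods untouched, which
    // is the "without unnecessary restarts" property being verified.
    if _, err := cs.AppsV1().DaemonSets("default").Patch(ctx, "daemon-set",
        types.StrategicMergePatchType, dsPatch("k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"), metav1.PatchOptions{}); err != nil {
        panic(err)
    }
}

------------------------------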
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 22:19:52.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 3 22:19:52.189: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 22:19:52.198: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 22:19:52.200: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 3 22:19:52.212: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container nodereport ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container reconcile ready: true, restart count 0
Jun 3 22:19:52.212: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container discover ready: false, restart count 0
Jun 3 22:19:52.212: INFO: Container init ready: false, restart count 0
Jun 3 22:19:52.212: INFO: Container install ready: false, restart count 0
Jun 3 22:19:52.212: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 22:19:52.212: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 22:19:52.212: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:19:52.212: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 22:19:52.212: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 22:19:52.212: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 22:19:52.212: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 22:19:52.212: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container collectd ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 22:19:52.212: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:19:52.212: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 3 22:19:52.212: INFO: Container config-reloader ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container grafana ready: true, restart count 0
Jun 3 22:19:52.212: INFO: Container prometheus ready: true, restart count 1
Jun 3 22:19:52.212: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 3 22:19:52.221: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container discover ready: false, restart count 0
Jun 3 22:19:52.221: INFO: Container init ready: false, restart count 0
Jun 3 22:19:52.221: INFO: Container install ready: false, restart count 0
Jun 3 22:19:52.221: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container nodereport ready: true, restart count 0
Jun 3 22:19:52.221: INFO: Container reconcile ready: true, restart count 0
Jun 3 22:19:52.221: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 22:19:52.221: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kube-multus ready: true, restart count 1
Jun 3 22:19:52.221: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 22:19:52.221: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 22:19:52.221: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 22:19:52.221: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 22:19:52.221: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 22:19:52.221: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 22:19:52.221: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 22:19:52.221: INFO: Container collectd ready: true, restart count 0
Jun 3 22:19:52.222: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 22:19:52.222: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 22:19:52.222: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 22:19:52.222: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 22:19:52.222: INFO: Container node-exporter ready: true, restart count 0
Jun 3 22:19:52.222: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded)
Jun 3 22:19:52.222: INFO: Container tas-extender ready: true, restart count 0
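The [It] block that follows creates pod4 bound to 0.0.0.0:54322 and then expects pod5, which requests 10.10.190.208:54322 on the same node, to stay unscheduled: binding 0.0.0.0 claims the hostPort on every address of the node, so any other hostIP with the same port and protocol conflicts. A minimal sketch of the two pod specs — the label key, "default" namespace, and agnhost image are placeholders:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod bound to hostPort 54322 with the given hostIP,
// pinned to the labelled node the way the test pins pod4 and pod5.
func hostPortPod(name, hostIP string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            // Stand-in for the random kubernetes.io/e2e-<uuid> label the
            // test applies to the chosen node.
            NodeSelector: map[string]string{"example.com/e2e-demo": "95"},
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // placeholder image
                Ports: []corev1.ContainerPort{{
                    ContainerPort: 8080,
                    HostPort:      54322,
                    HostIP:        hostIP,
                    Protocol:      corev1.ProtocolTCP,
                }},
            }},
        },
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // pod4 binds 0.0.0.0:54322, i.e. every address on the node; pod5 asks
    // for 10.10.190.208:54322 on the same node, which the scheduler treats
    // as a hostPort conflict, so pod5 remains Pending.
    for _, p := range []*corev1.Pod{
        hostPortPod("pod4", "0.0.0.0"),
        hostPortPod("pod5", "10.10.190.208"),
    } {
        if _, err := cs.CoreV1().Pods("default").Create(ctx, p, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
}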
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-45077828-e277-488b-86e4-c026d8841bdc 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-45077828-e277-488b-86e4-c026d8841bdc off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-45077828-e277-488b-86e4-c026d8841bdc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 22:25:00.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9292" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:308.184 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":17,"skipped":5250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 3 22:25:00.349: INFO: Running AfterSuite actions on all nodes
Jun 3 22:25:00.349: INFO: Running AfterSuite actions on node 1
Jun 3 22:25:00.349: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5756,"failed":0}

Ran 17 of 5773 Specs in 913.396 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5756 Skipped
PASS

Ginkgo ran 1 suite in 15m14.782939263s
Test Suite Passed