I1030 01:26:49.972283 22 e2e.go:129] Starting e2e run "e88c6211-58a9-4124-ac28-1d1458274220" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635557208 - Will randomize all specs
Will run 17 of 5770 specs

Oct 30 01:26:50.031: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:26:50.036: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 01:26:50.064: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 01:26:50.130: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 01:26:50.130: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 01:26:50.130: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 01:26:50.130: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 01:26:50.130: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 01:26:50.147: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 01:26:50.147: INFO: e2e test version: v1.21.5
Oct 30 01:26:50.148: INFO: kube-apiserver version: v1.21.1
Oct 30 01:26:50.148: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:26:50.155: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:26:50.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
W1030 01:26:50.177355 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:26:50.177: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:26:50.180: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:26:50.190: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:27:50.243: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Oct 30 01:27:50.271: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 30 01:27:50.291: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:28:08.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7714" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:78.233 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":1,"skipped":163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:28:08.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:28:08.422: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:29:08.479: INFO: Waiting for terminating namespaces to be deleted...
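(Editor's aside on the "Error creating dryrun pod; assuming PodSecurityPolicy is disabled" entry a few lines up: the framework probes for PodSecurityPolicy enforcement by attempting a server-side dry-run pod creation and treating failure as "PSP not enforced". A minimal client-go sketch of that probe follows; the package, function name, and pause image are assumptions for illustration, not the framework's exact code.)

```go
package e2esketch

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// probePodSecurityPolicy creates a pod with DryRun=All and infers from the
// result whether PodSecurityPolicy admission would block real pod creation.
func probePodSecurityPolicy(ctx context.Context, cs kubernetes.Interface, ns string) bool {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "psp-probe-"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // assumed image
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod,
		metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		// The run above hit this branch: the cmk.intel.com admission webhook
		// rejected dry-run requests, so the suite assumed PSP was disabled.
		fmt.Printf("Error creating dryrun pod; assuming PodSecurityPolicy is disabled: %v\n", err)
		return false
	}
	return true
}
```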
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:29:08.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Oct 30 01:29:12.537: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:29:30.608: INFO: pods created so far: [1 1 1]
Oct 30 01:29:30.608: INFO: length of pods created so far: 3
Oct 30 01:29:34.622: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:29:41.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-7269" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:29:41.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6929" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:93.309 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":2,"skipped":258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:29:41.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:29:47.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9483" for this suite.
STEP: Destroying namespace "nsdeletetest-3465" for this suite.
Oct 30 01:29:47.797: INFO: Namespace nsdeletetest-3465 was already deleted
STEP: Destroying namespace "nsdeletetest-435" for this suite.
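(Editor's aside on the Namespaces test above: it creates a Service, deletes the namespace, waits for the namespace object to disappear, then recreates a namespace of the same basename and verifies it contains no Services. A compressed client-go sketch of the deletion-and-wait half; the package, function name, and intervals are illustrative assumptions.)

```go
package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes ns and polls until the namespace object is
// fully gone, which implies the namespace controller has also removed its
// Services and other contents.
func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Deletion is asynchronous: the namespace sits in Terminating until its
	// contents are drained, so poll for NotFound rather than assuming success.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, nil
	})
}
```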
• [SLOW TEST:6.092 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":752,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:29:47.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:29:47.848: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct 30 01:29:47.854: INFO: Number of nodes with available pods: 0
Oct 30 01:29:47.854: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
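(Editor's aside on the "Change node label to blue" step above: the test flips a label on one node so it matches the DaemonSet's nodeSelector, and the controller launches or evicts daemon pods accordingly. A sketch of that label flip with a strategic-merge patch; the package, function name, and the label key/value parameters are illustrative, the real test generates a per-run key.)

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setNodeLabel patches a single label onto a node. The daemon-set controller
// reacts: a node whose labels match the DaemonSet's nodeSelector gets a daemon
// pod, and changing the value (the "green" step later) unschedules it again.
func setNodeLabel(ctx context.Context, cs kubernetes.Interface, node, key, value string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, value))
	_, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	return err
}
```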
Oct 30 01:29:47.870: INFO: Number of nodes with available pods: 0
Oct 30 01:29:47.870: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:48.874: INFO: Number of nodes with available pods: 0
Oct 30 01:29:48.874: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:49.874: INFO: Number of nodes with available pods: 0
Oct 30 01:29:49.874: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:50.875: INFO: Number of nodes with available pods: 1
Oct 30 01:29:50.875: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct 30 01:29:50.890: INFO: Number of nodes with available pods: 1
Oct 30 01:29:50.890: INFO: Number of running nodes: 0, number of available pods: 1
Oct 30 01:29:51.893: INFO: Number of nodes with available pods: 0
Oct 30 01:29:51.893: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct 30 01:29:51.905: INFO: Number of nodes with available pods: 0
Oct 30 01:29:51.905: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:52.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:52.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:53.909: INFO: Number of nodes with available pods: 0
Oct 30 01:29:53.909: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:54.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:54.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:55.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:55.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:56.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:56.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:57.909: INFO: Number of nodes with available pods: 0
Oct 30 01:29:57.909: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:58.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:58.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:29:59.908: INFO: Number of nodes with available pods: 0
Oct 30 01:29:59.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:00.908: INFO: Number of nodes with available pods: 0
Oct 30 01:30:00.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:01.909: INFO: Number of nodes with available pods: 0
Oct 30 01:30:01.909: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:02.908: INFO: Number of nodes with available pods: 0
Oct 30 01:30:02.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:03.908: INFO: Number of nodes with available pods: 0
Oct 30 01:30:03.908: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:04.909: INFO: Number of nodes with available pods: 0
Oct 30 01:30:04.909: INFO: Node node1 is running more than one daemon pod
Oct 30 01:30:05.908: INFO: Number of nodes with available pods: 1
Oct 30 01:30:05.908: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4214, will wait for the garbage collector to delete the pods
Oct 30 01:30:05.971: INFO: Deleting DaemonSet.extensions daemon-set took: 4.177668ms
Oct 30 01:30:06.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.268281ms
Oct 30 01:30:22.874: INFO: Number of nodes with available pods: 0
Oct 30 01:30:22.874: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:30:22.880: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"100105"},"items":null}
Oct 30 01:30:22.882: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"100105"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:30:22.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4214" for this suite.
• [SLOW TEST:35.096 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":4,"skipped":1473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:30:22.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:30:22.939: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:31:22.991: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Oct 30 01:31:23.018: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 30 01:31:23.039: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:31:45.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6026" for this suite.
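(Editor's aside on the preemption tests above: they pin roughly 2/3 of each node's allocatable resources with low- and medium-priority pods, then submit a higher-priority pod with the same resource request, which the scheduler can only place by evicting a lower-priority victim. A sketch of the PriorityClass objects and the spec.priorityClassName hook that drive this; the names and values are illustrative assumptions.)

```go
package e2esketch

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPriorityClasses creates a low and a high PriorityClass. Pods reference
// them via spec.priorityClassName; when a high-value pod cannot fit, the
// scheduler preempts pods whose class has a lower value.
func createPriorityClasses(ctx context.Context, cs kubernetes.Interface) error {
	for name, value := range map[string]int32{
		"sketch-low-priority":  1,    // assumed name and value
		"sketch-high-priority": 1000, // assumed name and value
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```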
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:82.212 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":5,"skipped":1560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:31:45.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 30 01:31:45.153: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 30 01:31:45.162: INFO: Waiting for terminating namespaces to be deleted...
Oct 30 01:31:45.164: INFO: Logging pods the apiserver thinks is on node node1 before test
Oct 30 01:31:45.180: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 01:31:45.180: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container discover ready: false, restart count 0
Oct 30 01:31:45.180: INFO: 	Container init ready: false, restart count 0
Oct 30 01:31:45.180: INFO: 	Container install ready: false, restart count 0
Oct 30 01:31:45.180: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kube-flannel ready: true, restart count 2
Oct 30 01:31:45.180: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 01:31:45.180: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 01:31:45.180: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 30 01:31:45.180: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 01:31:45.180: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 01:31:45.180: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 01:31:45.180: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container collectd ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 01:31:45.180: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 01:31:45.180: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container config-reloader ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container custom-metrics-apiserver ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container grafana ready: true, restart count 0
Oct 30 01:31:45.180: INFO: 	Container prometheus ready: true, restart count 1
Oct 30 01:31:45.180: INFO: preemptor-pod from sched-preemption-6026 started at 2021-10-30 01:31:38 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.180: INFO: 	Container preemptor-pod ready: true, restart count 0
Oct 30 01:31:45.180: INFO: Logging pods the apiserver thinks is on node node2 before test
Oct 30 01:31:45.192: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container nodereport ready: true, restart count 0
Oct 30 01:31:45.192: INFO: 	Container reconcile ready: true, restart count 0
Oct 30 01:31:45.192: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container discover ready: false, restart count 0
Oct 30 01:31:45.192: INFO: 	Container init ready: false, restart count 0
Oct 30 01:31:45.192: INFO: 	Container install ready: false, restart count 0
Oct 30 01:31:45.192: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container cmk-webhook ready: true, restart count 0
Oct 30 01:31:45.192: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kube-flannel ready: true, restart count 3
Oct 30 01:31:45.192: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kube-multus ready: true, restart count 1
Oct 30 01:31:45.192: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kube-proxy ready: true, restart count 1
Oct 30 01:31:45.192: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kubernetes-dashboard ready: true, restart count 1
Oct 30 01:31:45.192: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container nginx-proxy ready: true, restart count 2
Oct 30 01:31:45.192: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container nfd-worker ready: true, restart count 0
Oct 30 01:31:45.192: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kube-sriovdp ready: true, restart count 0
Oct 30 01:31:45.192: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container collectd ready: true, restart count 0
Oct 30 01:31:45.192: INFO: 	Container collectd-exporter ready: true, restart count 0
Oct 30 01:31:45.192: INFO: 	Container rbac-proxy ready: true, restart count 0
Oct 30 01:31:45.192: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container kube-rbac-proxy ready: true, restart count 0
Oct 30 01:31:45.192: INFO: 	Container node-exporter ready: true, restart count 0
Oct 30 01:31:45.192: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container tas-extender ready: true, restart count 0
Oct 30 01:31:45.192: INFO: pod1-sched-preemption-medium-priority from sched-preemption-6026 started at 2021-10-30 01:31:30 +0000 UTC (1 container statuses recorded)
Oct 30 01:31:45.192: INFO: 	Container pod1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-62ecf6f7-6585-4b8e-83b4-77cecde9acb4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-62ecf6f7-6585-4b8e-83b4-77cecde9acb4 off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-62ecf6f7-6585-4b8e-83b4-77cecde9acb4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:31:53.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3897" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.144 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":6,"skipped":2104,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:31:53.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
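(Editor's aside on the "Number of nodes with available pods" entries that follow: the test polls the DaemonSet's status, comparing how many schedulable nodes should carry a daemon pod against how many actually report one available. A sketch of that readiness check; the status fields are the real apps/v1 DaemonSet status fields, while the package, function name, and polling cadence are illustrative.)

```go
package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady polls until every node the controller scheduled a
// daemon pod onto reports that pod as available.
func waitForDaemonSetReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// DesiredNumberScheduled counts nodes the pods can run on (masters with
		// NoSchedule taints are excluded, as the log notes below), and
		// NumberAvailable counts pods that are up and past minReadySeconds.
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}
```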
Oct 30 01:31:53.318: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:53.318: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:53.318: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:53.320: INFO: Number of nodes with available pods: 0
Oct 30 01:31:53.320: INFO: Node node1 is running more than one daemon pod
Oct 30 01:31:54.325: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:54.325: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:54.326: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:54.330: INFO: Number of nodes with available pods: 0
Oct 30 01:31:54.330: INFO: Node node1 is running more than one daemon pod
Oct 30 01:31:55.324: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:55.324: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:55.324: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:55.327: INFO: Number of nodes with available pods: 0
Oct 30 01:31:55.327: INFO: Node node1 is running more than one daemon pod
Oct 30 01:31:56.328: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:56.328: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:56.328: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:56.332: INFO: Number of nodes with available pods: 1
Oct 30 01:31:56.332: INFO: Node node2 is running more than one daemon pod
Oct 30 01:31:57.329: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.329: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.329: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.336: INFO: Number of nodes with available pods: 2
Oct 30 01:31:57.336: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
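(Editor's aside on the repeated "can't tolerate node master1/2/3" entries above: the three masters carry the node-role.kubernetes.io/master:NoSchedule taint and the test's daemon pods have no matching toleration, so those nodes are excluded from the expected count. For contrast, a DaemonSet that should also cover the masters would add a toleration like the sketch below; this suite's DaemonSet deliberately omits it.)

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// masterToleration lets a daemon pod schedule onto nodes carrying the
// node-role.kubernetes.io/master:NoSchedule taint seen in the log. It would
// go into the DaemonSet's pod template under spec.tolerations.
var masterToleration = v1.Toleration{
	Key:      "node-role.kubernetes.io/master",
	Operator: v1.TolerationOpExists,
	Effect:   v1.TaintEffectNoSchedule,
}
```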
Oct 30 01:31:57.352: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.352: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.352: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:57.355: INFO: Number of nodes with available pods: 1
Oct 30 01:31:57.355: INFO: Node node1 is running more than one daemon pod
Oct 30 01:31:58.361: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:58.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:58.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:58.364: INFO: Number of nodes with available pods: 1
Oct 30 01:31:58.364: INFO: Node node1 is running more than one daemon pod
Oct 30 01:31:59.361: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:59.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:59.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:31:59.364: INFO: Number of nodes with available pods: 1
Oct 30 01:31:59.364: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:00.360: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:00.360: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:00.360: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:00.362: INFO: Number of nodes with available pods: 1
Oct 30 01:32:00.363: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:01.361: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:01.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:01.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:01.364: INFO: Number of nodes with available pods: 1
Oct 30 01:32:01.364: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:02.363: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:02.363: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:02.363: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:02.365: INFO: Number of nodes with available pods: 1
Oct 30 01:32:02.365: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:03.361: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:03.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:03.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:03.364: INFO: Number of nodes with available pods: 1
Oct 30 01:32:03.364: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:04.361: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:04.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:04.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:04.365: INFO: Number of nodes with available pods: 1
Oct 30 01:32:04.365: INFO: Node node1 is running more than one daemon pod
Oct 30 01:32:05.360: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:05.361: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:05.361: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:32:05.363: INFO: Number of nodes with available pods: 2
Oct 30 01:32:05.363: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-86, will wait for the garbage collector to delete the pods
Oct 30 01:32:05.424: INFO: Deleting DaemonSet.extensions daemon-set took: 4.325559ms
Oct 30 01:32:05.524: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.082166ms
Oct 30 01:32:09.228: INFO: Number of nodes with available pods: 0
Oct 30 01:32:09.228: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:32:09.230: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"100598"},"items":null}
Oct 30 01:32:09.233: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"100598"},"items":null}
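(Editor's aside on "will wait for the garbage collector to delete the pods" above: the teardown deletes the DaemonSet with foreground propagation, so the API server keeps the DaemonSet object, marked with a deletionTimestamp, until the garbage collector has reaped its dependent pods; the test then confirms both lists come back empty. A sketch of the delete call; package and function name are illustrative.)

```go
package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSetForeground removes a DaemonSet with foreground cascading
// deletion. The call returns once accepted; the object itself only disappears
// after the garbage collector has deleted its daemon pods, which is why the
// test keeps polling pod counts afterwards.
func deleteDaemonSetForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	foreground := metav1.DeletePropagationForeground
	return cs.AppsV1().DaemonSets(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &foreground,
	})
}
```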
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:32:09.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-86" for this suite.
• [SLOW TEST:15.976 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":7,"skipped":2171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:32:09.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:32:09.304: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:33:09.355: INFO: Waiting for terminating namespaces to be deleted...
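(Editor's aside on the PriorityClass-endpoints test that follows: the two "Forbidden: may not be changed in an update" entries below are expected, because the value field of a PriorityClass is immutable; the test exercises the endpoints with different HTTP verbs and must see update attempts on that field rejected. A sketch of provoking exactly that rejection; package, function name, and parameters are illustrative.)

```go
package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpPriorityClassValue tries to change an existing PriorityClass's value.
// The API server rejects this with "Value: Forbidden: may not be changed in
// an update", which is what the log records for p1 and p2 below.
func bumpPriorityClassValue(ctx context.Context, cs kubernetes.Interface, name string, newValue int32) error {
	pc, err := cs.SchedulingV1().PriorityClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pc.Value = newValue // immutable field: the Update below must fail
	_, err = cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
	return err
}
```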
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:33:09.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:33:09.391: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Oct 30 01:33:09.393: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:33:09.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4054" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:33:09.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2344" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.191 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":8,"skipped":3212,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:33:09.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:33:09.496: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Oct 30 01:33:09.504: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:09.504: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:09.504: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:09.506: INFO: Number of nodes with available pods: 0
Oct 30 01:33:09.506: INFO: Node node1 is running more than one daemon pod
Oct 30 01:33:10.511: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:10.511: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:10.511: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:10.513: INFO: Number of nodes with available pods: 0
Oct 30 01:33:10.513: INFO: Node node1 is running more than one daemon pod
Oct 30 01:33:11.512: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:11.512: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:11.512: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:11.514: INFO: Number of nodes with available pods: 0
Oct 30 01:33:11.514: INFO: Node node1 is running more than one daemon pod
Oct 30 01:33:12.514: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:12.515: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:12.515: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:12.517: INFO: Number of nodes with available pods: 1
Oct 30 01:33:12.517: INFO: Node node2 is running more than one daemon pod
Oct 30 01:33:13.512: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:13.512: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:13.512: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:13.515: INFO: Number of nodes with available pods: 2
Oct 30 01:33:13.515: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Oct 30 01:33:13.539: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:33:13.543: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:13.543: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:13.543: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:14.548: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:33:14.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:14.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:14.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:15.548: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:33:15.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:15.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:15.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:16.550: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:33:16.554: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:16.554: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:16.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:33:17.549: INFO: Pod daemon-set-l6lf4 is not available
Oct 30 01:33:17.549: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
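(Editor's aside on the "Wrong image for pod" entries above: they track a RollingUpdate in flight. The DaemonSet's pod template was switched from the httpd image to agnhost, and the poll keeps going until no pod still reports the old image and the replacements are available. A sketch of triggering such a rollout; the image names are taken from the log, everything else is illustrative.)

```go
package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollDaemonSetImage sets a RollingUpdate strategy and bumps the container
// image, so the controller replaces daemon pods node by node instead of
// requiring manual pod deletion (the old OnDelete behavior).
func rollDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
	}
	// Old image per the log: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.32"
	_, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	return err
}
```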
Oct 30 01:33:17.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:17.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:17.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:18.548: INFO: Pod daemon-set-l6lf4 is not available Oct 30 01:33:18.548: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 30 01:33:18.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:18.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:18.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:19.548: INFO: Pod daemon-set-l6lf4 is not available Oct 30 01:33:19.548: INFO: Wrong image for pod: daemon-set-p789l. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Oct 30 01:33:19.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:19.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:19.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:20.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:20.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:20.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:21.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:21.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:21.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:22.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:22.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
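
The once-per-second "Wrong image for pod" lines above are a poll: the test keeps re-listing the DaemonSet's pods until every one reports the new agnhost image. A minimal sketch of such a wait loop, assuming a client-go clientset is already wired up; the label selector and timeout here are illustrative, not the framework's:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForImage polls until every matching pod runs the expected image.
func waitForImage(cs kubernetes.Interface, ns, image string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
			LabelSelector: "daemonset-name=daemon-set", // hypothetical selector
		})
		if err != nil {
			return false, err
		}
		for _, pod := range pods.Items {
			if got := pod.Spec.Containers[0].Image; got != image {
				fmt.Printf("Wrong image for pod: %s. Expected: %s, got: %s.\n", pod.Name, image, got)
				return false, nil // keep polling
			}
		}
		return true, nil
	})
}

func main() {
	// Wiring up a real clientset (e.g. via clientcmd) is omitted; the call
	// would be waitForImage(cs, "daemonsets-7206", "k8s.gcr.io/e2e-test-images/agnhost:2.32").
	_ = waitForImage
}
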
Oct 30 01:33:22.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:23.555: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:23.555: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:23.555: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:24.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:24.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:24.554: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:25.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:25.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:25.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:26.551: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:26.551: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:26.551: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:27.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:27.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:27.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:28.556: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:28.556: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:28.556: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:29.551: INFO: DaemonSet pods can't tolerate node master1 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:29.551: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:29.551: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:30.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:30.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:30.552: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:31.552: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:31.552: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:31.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:32.557: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:32.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:32.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.549: INFO: Pod daemon-set-gbqb4 is not available Oct 30 01:33:33.553: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.553: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.553: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
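
The window above — where a node briefly reports no available pod between daemon-set-p789l going away and daemon-set-gbqb4 becoming ready — is the default RollingUpdate behavior: with maxUnavailable left at 1, the controller replaces one node's pod at a time. A minimal sketch of a DaemonSet spec that states that strategy explicitly (the names, labels, and container are illustrative; the httpd image is the test's starting image):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(1)              // default: replace one node's pod at a time
	labels := map[string]string{"app": "daemon-set"} // illustrative labels

	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Spec.UpdateStrategy.Type) // RollingUpdate
}
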
Oct 30 01:33:33.557: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.557: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.557: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:33.560: INFO: Number of nodes with available pods: 1 Oct 30 01:33:33.560: INFO: Node node1 is running more than one daemon pod Oct 30 01:33:34.565: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:34.565: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:34.565: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:34.568: INFO: Number of nodes with available pods: 1 Oct 30 01:33:34.568: INFO: Node node1 is running more than one daemon pod Oct 30 01:33:35.565: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:35.565: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:35.565: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:35.567: INFO: Number of nodes with available pods: 1 Oct 30 01:33:35.567: INFO: Node node1 is running more than one daemon pod Oct 30 01:33:36.565: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:36.566: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:36.566: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:36.568: INFO: Number of nodes with available pods: 1 Oct 30 01:33:36.568: INFO: Node node1 is running more than one daemon pod Oct 30 01:33:37.566: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:37.566: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:37.566: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:33:37.569: INFO: Number of nodes with available pods: 2 Oct 30 01:33:37.569: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7206, will wait for the garbage collector to delete the pods Oct 30 01:33:37.638: INFO: Deleting DaemonSet.extensions daemon-set took: 3.742776ms Oct 30 01:33:37.739: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.02587ms Oct 30 01:33:45.344: INFO: Number of nodes with available pods: 0 Oct 30 01:33:45.344: INFO: Number of running nodes: 0, number of available pods: 0 Oct 30 01:33:45.346: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"100998"},"items":null} Oct 30 01:33:45.349: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"100998"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:33:45.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7206" for this suite. • [SLOW TEST:35.916 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":9,"skipped":3245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:33:45.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:33:45.401: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:33:45.408: INFO: Waiting for terminating namespaces to be deleted... 
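
The teardown above deletes the DaemonSet object first and then waits for the garbage collector to reap its pods, which is why the pod count only reaches zero several seconds after the delete returns. A minimal sketch of a delete that defers dependents to the garbage collector, assuming a clientset; the log does not show which propagation policy the framework uses, so the background policy below is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDaemonSetViaGC removes the DaemonSet and lets the garbage
// collector delete its pods afterwards (background propagation).
func deleteDaemonSetViaGC(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().DaemonSets(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}

func main() {
	// Clientset wiring omitted; a caller would then poll, as the test does,
	// until the pod count for the DaemonSet reaches zero.
	fmt.Println("sketch only")
	_ = deleteDaemonSetViaGC
}
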
Oct 30 01:33:45.410: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:33:45.422: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:33:45.422: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:33:45.422: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:33:45.422: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:33:45.422: INFO: Container discover ready: false, restart count 0 Oct 30 01:33:45.422: INFO: Container init ready: false, restart count 0 Oct 30 01:33:45.422: INFO: Container install ready: false, restart count 0 Oct 30 01:33:45.422: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:33:45.422: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:33:45.422: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:33:45.422: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:33:45.422: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:33:45.422: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:33:45.422: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.422: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:33:45.422: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:33:45.423: INFO: Container collectd ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:33:45.423: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:33:45.423: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:33:45.423: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:33:45.423: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container grafana ready: true, restart count 0 Oct 30 01:33:45.423: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:33:45.423: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 
01:33:45.432: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:33:45.433: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:33:45.433: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:33:45.433: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:33:45.433: INFO: Container discover ready: false, restart count 0 Oct 30 01:33:45.433: INFO: Container init ready: false, restart count 0 Oct 30 01:33:45.433: INFO: Container install ready: false, restart count 0 Oct 30 01:33:45.433: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:33:45.433: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:33:45.433: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:33:45.433: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:33:45.433: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:33:45.433: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:33:45.433: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:33:45.433: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:33:45.433: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:33:45.433: INFO: Container collectd ready: true, restart count 0 Oct 30 01:33:45.433: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:33:45.433: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:33:45.433: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:33:45.433: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:33:45.433: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:33:45.433: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:33:45.433: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-922b74b4-c6d1-46fc-928f-174c65ea8bc9 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-922b74b4-c6d1-46fc-928f-174c65ea8bc9 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-922b74b4-c6d1-46fc-928f-174c65ea8bc9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:38:53.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6938" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.160 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":10,"skipped":3447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:38:53.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
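
The hostPort test that passed above hinges on the scheduler treating hostIP 0.0.0.0 on a given hostPort/protocol as claiming that port on every address of the node, so pod5's request for 54322 on 10.10.190.208 cannot fit on the node where pod4 landed. A minimal sketch of the two conflicting port declarations — the port, IP, and pod names come from the log; the image and the rest of the spec are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithHostPort(name, hostIP string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
				Ports: []v1.ContainerPort{{
					HostPort:      54322,
					ContainerPort: 54322,
					Protocol:      v1.ProtocolTCP,
					HostIP:        hostIP,
				}},
			}},
		},
	}
}

func main() {
	pod4 := podWithHostPort("pod4", "0.0.0.0")       // schedules: the port is free
	pod5 := podWithHostPort("pod5", "10.10.190.208") // stays Pending on that node: 0.0.0.0 already claims 54322
	fmt.Println(pod4.Spec.Containers[0].Ports[0].HostIP, pod5.Spec.Containers[0].Ports[0].HostIP)
}
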
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:39:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7469" for this suite. STEP: Destroying namespace "nsdeletetest-3748" for this suite. Oct 30 01:39:24.644: INFO: Namespace nsdeletetest-3748 was already deleted STEP: Destroying namespace "nsdeletetest-8434" for this suite. • [SLOW TEST:31.104 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":11,"skipped":3748,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:39:24.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 30 01:39:24.970: INFO: Pod name wrapped-volume-race-af3fbf0b-b947-4c76-8e8e-7011348acdaa: Found 2 pods out of 5 Oct 30 01:39:29.980: INFO: Pod name wrapped-volume-race-af3fbf0b-b947-4c76-8e8e-7011348acdaa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-af3fbf0b-b947-4c76-8e8e-7011348acdaa in 
namespace emptydir-wrapper-2823, will wait for the garbage collector to delete the pods Oct 30 01:39:46.061: INFO: Deleting ReplicationController wrapped-volume-race-af3fbf0b-b947-4c76-8e8e-7011348acdaa took: 5.246746ms Oct 30 01:39:46.162: INFO: Terminating ReplicationController wrapped-volume-race-af3fbf0b-b947-4c76-8e8e-7011348acdaa pods took: 100.382272ms STEP: Creating RC which spawns configmap-volume pods Oct 30 01:40:02.980: INFO: Pod name wrapped-volume-race-b01999e6-8285-42e7-806f-049ec18ec574: Found 0 pods out of 5 Oct 30 01:40:07.989: INFO: Pod name wrapped-volume-race-b01999e6-8285-42e7-806f-049ec18ec574: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b01999e6-8285-42e7-806f-049ec18ec574 in namespace emptydir-wrapper-2823, will wait for the garbage collector to delete the pods Oct 30 01:40:22.070: INFO: Deleting ReplicationController wrapped-volume-race-b01999e6-8285-42e7-806f-049ec18ec574 took: 4.587464ms Oct 30 01:40:22.171: INFO: Terminating ReplicationController wrapped-volume-race-b01999e6-8285-42e7-806f-049ec18ec574 pods took: 100.475756ms STEP: Creating RC which spawns configmap-volume pods Oct 30 01:40:32.989: INFO: Pod name wrapped-volume-race-9c3612c0-0b36-4cd4-83ab-6e6ff4bff848: Found 0 pods out of 5 Oct 30 01:40:37.997: INFO: Pod name wrapped-volume-race-9c3612c0-0b36-4cd4-83ab-6e6ff4bff848: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9c3612c0-0b36-4cd4-83ab-6e6ff4bff848 in namespace emptydir-wrapper-2823, will wait for the garbage collector to delete the pods Oct 30 01:40:50.077: INFO: Deleting ReplicationController wrapped-volume-race-9c3612c0-0b36-4cd4-83ab-6e6ff4bff848 took: 3.944903ms Oct 30 01:40:50.177: INFO: Terminating ReplicationController wrapped-volume-race-9c3612c0-0b36-4cd4-83ab-6e6ff4bff848 pods took: 100.362553ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:41:03.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2823" for this suite. 
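
The wrapper-volume test above creates 50 ConfigMaps and then, three times over, spawns a 5-pod ReplicationController whose pods mount all of them, racing pod startup against volume setup to shake out races in the configmap-volume plumbing. A minimal sketch of building that many-volume pod spec; the naming scheme is illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// configMapVolumes builds one volume and one mount per ConfigMap — the
// many-volume shape the race test stresses in a single pod.
func configMapVolumes(n int) ([]v1.Volume, []v1.VolumeMount) {
	var vols []v1.Volume
	var mounts []v1.VolumeMount
	for i := 0; i < n; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name pattern
		vols = append(vols, v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, v1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	return vols, mounts
}

func main() {
	vols, mounts := configMapVolumes(50)
	fmt.Println(len(vols), len(mounts)) // 50 50
}
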
• [SLOW TEST:98.416 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":12,"skipped":4796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:41:03.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:41:03.104: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:41:03.113: INFO: Waiting for terminating namespaces to be deleted... Oct 30 01:41:03.115: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:41:03.128: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:41:03.128: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:41:03.129: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:41:03.129: INFO: Container discover ready: false, restart count 0 Oct 30 01:41:03.129: INFO: Container init ready: false, restart count 0 Oct 30 01:41:03.129: INFO: Container install ready: false, restart count 0 Oct 30 01:41:03.129: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:41:03.129: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:41:03.129: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:41:03.129: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:41:03.129: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:41:03.129: INFO: 
node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:41:03.129: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:41:03.129: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:41:03.129: INFO: Container collectd ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:41:03.129: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:41:03.129: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:41:03.129: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:41:03.129: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container grafana ready: true, restart count 0 Oct 30 01:41:03.129: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:41:03.129: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 01:41:03.136: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:41:03.136: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:41:03.136: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:41:03.136: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:41:03.136: INFO: Container discover ready: false, restart count 0 Oct 30 01:41:03.136: INFO: Container init ready: false, restart count 0 Oct 30 01:41:03.136: INFO: Container install ready: false, restart count 0 Oct 30 01:41:03.136: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:41:03.136: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:41:03.136: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:41:03.136: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:41:03.136: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:41:03.136: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses 
recorded) Oct 30 01:41:03.136: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:41:03.136: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:41:03.136: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:41:03.136: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:41:03.136: INFO: Container collectd ready: true, restart count 0 Oct 30 01:41:03.136: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:41:03.136: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:41:03.136: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:41:03.136: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:41:03.136: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:41:03.136: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:41:03.136: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b2ab6494dbade7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:41:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3232" for this suite. 
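
The single FailedScheduling event above is the test's expected outcome: the pod's NodeSelector matches no worker label, and the three masters are excluded by their NoSchedule taint, leaving 0 of 5 nodes available. A minimal sketch of a pod that is guaranteed not to schedule — the selector key and value are illustrative; the e2e uses a similarly nonempty selector:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			// No node carries this label, so scheduling must fail with
			// "node(s) didn't match Pod's node affinity/selector".
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}
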
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":13,"skipped":4869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:41:04.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:41:04.212: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:41:04.221: INFO: Waiting for terminating namespaces to be deleted... Oct 30 01:41:04.225: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:41:04.245: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:41:04.245: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:41:04.245: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:41:04.245: INFO: Container discover ready: false, restart count 0 Oct 30 01:41:04.245: INFO: Container init ready: false, restart count 0 Oct 30 01:41:04.245: INFO: Container install ready: false, restart count 0 Oct 30 01:41:04.245: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:41:04.245: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:41:04.245: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:41:04.245: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:41:04.245: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) 
Oct 30 01:41:04.245: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:41:04.245: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:41:04.245: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:41:04.245: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:41:04.245: INFO: Container collectd ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:41:04.245: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:41:04.245: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:41:04.245: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:41:04.245: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container grafana ready: true, restart count 0 Oct 30 01:41:04.245: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:41:04.245: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 01:41:04.258: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:41:04.258: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:41:04.258: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:41:04.258: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:41:04.258: INFO: Container discover ready: false, restart count 0 Oct 30 01:41:04.258: INFO: Container init ready: false, restart count 0 Oct 30 01:41:04.258: INFO: Container install ready: false, restart count 0 Oct 30 01:41:04.258: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:41:04.258: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:41:04.258: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:41:04.258: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:41:04.258: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:41:04.258: 
INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:41:04.258: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.258: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:41:04.258: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.259: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:41:04.259: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:41:04.259: INFO: Container collectd ready: true, restart count 0 Oct 30 01:41:04.259: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:41:04.259: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:41:04.259: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:41:04.259: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:41:04.259: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:41:04.259: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:41:04.259: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node node1 STEP: verifying the node has the label node node2 Oct 30 01:41:10.383: INFO: Pod cmk-89lqq requesting resource cpu=0m on Node node1 Oct 30 01:41:10.383: INFO: Pod cmk-8bpbf requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod cmk-webhook-6c9d5f8578-ffk66 requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod kube-flannel-f6s5v requesting resource cpu=150m on Node node2 Oct 30 01:41:10.383: INFO: Pod kube-flannel-phg88 requesting resource cpu=150m on Node node1 Oct 30 01:41:10.383: INFO: Pod kube-multus-ds-amd64-68wrz requesting resource cpu=100m on Node node1 Oct 30 01:41:10.383: INFO: Pod kube-multus-ds-amd64-7tvbl requesting resource cpu=100m on Node node2 Oct 30 01:41:10.383: INFO: Pod kube-proxy-76285 requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod kube-proxy-z5hqt requesting resource cpu=0m on Node node1 Oct 30 01:41:10.383: INFO: Pod kubernetes-dashboard-785dcbb76d-pbjjt requesting resource cpu=50m on Node node2 Oct 30 01:41:10.383: INFO: Pod kubernetes-metrics-scraper-5558854cb-5rmjw requesting resource cpu=0m on Node node1 Oct 30 01:41:10.383: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Oct 30 01:41:10.383: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Oct 30 01:41:10.383: INFO: Pod node-feature-discovery-worker-h6lcp requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod node-feature-discovery-worker-w5vdb requesting resource cpu=0m on Node node1 Oct 30 01:41:10.383: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-t789r requesting resource cpu=0m on 
Node node1 Oct 30 01:41:10.383: INFO: Pod collectd-d45rv requesting resource cpu=0m on Node node1 Oct 30 01:41:10.383: INFO: Pod collectd-flvhl requesting resource cpu=0m on Node node2 Oct 30 01:41:10.383: INFO: Pod node-exporter-256wm requesting resource cpu=112m on Node node1 Oct 30 01:41:10.383: INFO: Pod node-exporter-r77s4 requesting resource cpu=112m on Node node2 Oct 30 01:41:10.383: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Oct 30 01:41:10.383: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-989mh requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. Oct 30 01:41:10.383: INFO: Creating a pod which consumes cpu=53594m on Node node2 Oct 30 01:41:10.395: INFO: Creating a pod which consumes cpu=53489m on Node node1 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c.16b2ab6643ab6670], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6959/filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c.16b2ab66d4a0d144], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c.16b2ab66e66a184b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 298.396805ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c.16b2ab66ee01d162], Reason = [Created], Message = [Created container filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c.16b2ab66f59b8b4f], Reason = [Started], Message = [Started container filler-pod-1be4d436-8278-47ae-96fe-1cf83fb40b7c] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe.16b2ab664414a748], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6959/filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe.16b2ab66e5cc01e2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe.16b2ab66f85d0d63], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.488877ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe.16b2ab66fed59951], Reason = [Created], Message = [Created container filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe.16b2ab6706674514], Reason = [Started], Message = [Started container filler-pod-aa94401a-69ba-48dd-8412-265629e64ebe] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b2ab6733aa7ac9], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
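
The filler pods above are sized per node as allocatable CPU minus the requests the scheduler already accounts for (hence the odd 53594m/53489m figures), so the follow-up pod's own CPU request cannot fit on either worker and only the tainted masters remain — producing the "2 Insufficient cpu" event. A minimal sketch of building such a CPU request; the helper is illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// fillerRequests returns a CPU request sized to consume the given
// millicpu, e.g. allocatable minus what is already requested on the node.
func fillerRequests(milliCPU int64) v1.ResourceRequirements {
	qty := resource.NewMilliQuantity(milliCPU, resource.DecimalSI)
	return v1.ResourceRequirements{
		Requests: v1.ResourceList{v1.ResourceCPU: *qty},
	}
}

func main() {
	req := fillerRequests(53594) // node2's filler size from the log
	fmt.Println(req.Requests.Cpu().String()) // 53594m
}
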
STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:41:15.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6959" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.275 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":14,"skipped":5290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:41:15.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:41:15.515: INFO: Create a RollingUpdate DaemonSet Oct 30 01:41:15.519: INFO: Check that daemon pods launch on every node of the cluster Oct 30 01:41:15.523: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:41:15.523: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:41:15.523: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:41:15.525: INFO: Number of nodes with available pods: 0 Oct 30 01:41:15.525: INFO: Node node1 is running more than one daemon pod Oct 30 01:41:16.531: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:41:16.531: INFO: DaemonSet pods can't tolerate node master2 with taints 
Oct 30 01:41:16.531: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:16.534: INFO: Number of nodes with available pods: 0
Oct 30 01:41:16.534: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:17.532: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:17.532: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:17.532: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:17.535: INFO: Number of nodes with available pods: 0
Oct 30 01:41:17.535: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:18.530: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:18.531: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:18.531: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:18.533: INFO: Number of nodes with available pods: 2
Oct 30 01:41:18.533: INFO: Number of running nodes: 2, number of available pods: 2
Oct 30 01:41:18.533: INFO: Update the DaemonSet to trigger a rollout
Oct 30 01:41:18.539: INFO: Updating DaemonSet daemon-set
Oct 30 01:41:33.555: INFO: Roll back the DaemonSet before rollout is complete
Oct 30 01:41:33.564: INFO: Updating DaemonSet daemon-set
Oct 30 01:41:33.564: INFO: Make sure DaemonSet rollback is complete
Oct 30 01:41:33.568: INFO: Wrong image for pod: daemon-set-vfmhq. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
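The "Wrong image" line above is the rollback assertion at work: the rollout was reverted while only one pod (daemon-set-vfmhq) had been recreated with the bad image, so only that pod needs replacing and the healthy daemon pod keeps running, which is what "without unnecessary restarts" means. A hypothetical client-go sketch of the update-then-revert sequence; the namespace is taken from this run's log, the rest is illustrative:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	dsClient := clientset.AppsV1().DaemonSets("daemonsets-911")

	// Re-read and update under RetryOnConflict so a concurrent write by
	// the DaemonSet controller doesn't fail the update.
	setImage := func(image string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
			if err != nil {
				return err
			}
			ds.Spec.Template.Spec.Containers[0].Image = image
			_, err = dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{})
			return err
		})
	}

	// Trigger a RollingUpdate toward an image that can never be pulled...
	if err := setImage("foo:non-existent"); err != nil {
		panic(err)
	}
	// ...then roll back by restoring the original image before the
	// rollout completes; pods already running the good image stay put.
	if err := setImage("k8s.gcr.io/e2e-test-images/httpd:2.4.38-1"); err != nil {
		panic(err)
	}
}
```

Restoring the pod template this way is roughly what `kubectl rollout undo daemonset/daemon-set` does, which re-applies the template recorded in the previous ControllerRevision.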
Oct 30 01:41:33.568: INFO: Pod daemon-set-vfmhq is not available
Oct 30 01:41:33.572: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:33.573: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:33.573: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:34.584: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:34.585: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:34.585: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:35.582: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:35.582: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:35.582: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:36.578: INFO: Pod daemon-set-qv7hk is not available
Oct 30 01:41:36.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:36.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:36.583: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-911, will wait for the garbage collector to delete the pods
Oct 30 01:41:36.645: INFO: Deleting DaemonSet.extensions daemon-set took: 4.498635ms
Oct 30 01:41:36.746: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.098795ms
Oct 30 01:41:52.850: INFO: Number of nodes with available pods: 0
Oct 30 01:41:52.850: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:41:52.853: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"103401"},"items":null}
Oct 30 01:41:52.856: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"103401"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:41:52.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-911" for this suite.
• [SLOW TEST:37.406 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":15,"skipped":5488,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should retry creating failed daemon pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:41:52.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 30 01:41:52.938: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:52.938: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:52.938: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:52.945: INFO: Number of nodes with available pods: 0
Oct 30 01:41:52.945: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:53.950: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:53.950: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:53.950: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:53.953: INFO: Number of nodes with available pods: 0
Oct 30 01:41:53.953: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:54.952: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:54.952: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:54.952: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:54.955: INFO: Number of nodes with available pods: 0
Oct 30 01:41:54.955: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:55.952: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.952: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.952: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.955: INFO: Number of nodes with available pods: 2
Oct 30 01:41:55.955: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 30 01:41:55.972: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.972: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.973: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:55.975: INFO: Number of nodes with available pods: 1
Oct 30 01:41:55.975: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:56.983: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:56.983: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:56.983: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:56.986: INFO: Number of nodes with available pods: 1
Oct 30 01:41:56.986: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:57.980: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:57.980: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:57.980: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:57.983: INFO: Number of nodes with available pods: 1
Oct 30 01:41:57.983: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:58.979: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:58.979: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:58.979: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:58.982: INFO: Number of nodes with available pods: 1
Oct 30 01:41:58.982: INFO: Node node1 is running more than one daemon pod
Oct 30 01:41:59.982: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:59.983: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:59.983: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:41:59.986: INFO: Number of nodes with available pods: 2
Oct 30 01:41:59.986: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5108, will wait for the garbage collector to delete the pods
Oct 30 01:42:00.049: INFO: Deleting DaemonSet.extensions daemon-set took: 4.776841ms
Oct 30 01:42:00.150: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.278349ms
Oct 30 01:42:12.952: INFO: Number of nodes with available pods: 0
Oct 30 01:42:12.952: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:42:12.954: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"103554"},"items":null}
Oct 30 01:42:12.961: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"103554"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:42:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5108" for this suite.
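The revive step in this spec works by overwriting one daemon pod's status to Failed; the DaemonSet controller treats the failed pod as unusable, deletes it, and schedules a replacement, which is why the available count dips to 1 and then returns to 2 above. A hypothetical sketch of that status write; the pod name placeholder and the hard-coded namespace (taken from this run's log) are illustrative:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	pods := clientset.CoreV1().Pods("daemonsets-5108")

	// Fetch one of the daemon pods; "daemon-set-xxxxx" stands in for a
	// real generated pod name.
	pod, err := pods.Get(context.TODO(), "daemon-set-xxxxx", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Force the phase to Failed via the status subresource; the
	// DaemonSet controller then deletes the pod and creates a fresh one
	// on the same node.
	pod.Status.Phase = corev1.PodFailed
	if _, err := pods.UpdateStatus(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```

Writing through the status subresource matters here: a normal Update would be rejected for mutating status, and the controller only reacts once the recorded phase actually changes.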
• [SLOW TEST:20.107 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":16,"skipped":5510,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should patch a Namespace [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:42:12.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:42:13.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9325" for this suite.
STEP: Destroying namespace "nspatchtest-1a14f63c-6469-42ad-b0b7-7a18ccb52519-6856" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":17,"skipped":5667,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Oct 30 01:42:13.054: INFO: Running AfterSuite actions on all nodes
Oct 30 01:42:13.054: INFO: Running AfterSuite actions on node 1
Oct 30 01:42:13.054: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0}
Ran 17 of 5770 Specs in 923.028 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped
PASS

Ginkgo ran 1 suite in 15m24.37775403s
Test Suite Passed
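For completeness, the final Namespaces spec above reduces to a single strategic-merge patch that adds a label, followed by a read-back to confirm it. A hypothetical client-go sketch; the target namespace and the label key/value are illustrative, since the spec creates its own throwaway namespace:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Strategic-merge patch adding a label to the namespace, as the
	// spec's "patching the Namespace" step does.
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	if _, err := clientset.CoreV1().Namespaces().Patch(
		context.TODO(), "default", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Read back and confirm the label landed, mirroring the spec's
	// "ensuring it has the label" step.
	ns, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ns.Labels["testLabel"])
}
```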