I1030 01:44:25.824017 23 e2e.go:129] Starting e2e run "e520e012-2be1-4ed7-8268-10da2bea6c6b" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635558264 - Will randomize all specs
Will run 17 of 5770 specs

Oct 30 01:44:25.883: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:44:25.888: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 01:44:25.917: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 01:44:25.986: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 01:44:25.986: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 01:44:25.986: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 01:44:25.986: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 01:44:25.986: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 01:44:26.005: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 01:44:26.005: INFO: e2e test version: v1.21.5
Oct 30 01:44:26.007: INFO: kube-apiserver version: v1.21.1
Oct 30 01:44:26.007: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 01:44:26.013: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:44:26.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
W1030 01:44:26.040759 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 30 01:44:26.041: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 30 01:44:26.044: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Oct 30 01:44:26.424: INFO: Pod name wrapped-volume-race-580538f0-b273-4c6f-a88f-f1e7b6c0169e: Found 2 pods out of 5
Oct 30 01:44:31.434: INFO: Pod name wrapped-volume-race-580538f0-b273-4c6f-a88f-f1e7b6c0169e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-580538f0-b273-4c6f-a88f-f1e7b6c0169e in namespace emptydir-wrapper-4222, will wait for the garbage collector to delete the pods
Oct 30 01:44:47.513: INFO: Deleting ReplicationController wrapped-volume-race-580538f0-b273-4c6f-a88f-f1e7b6c0169e took: 5.156638ms
Oct 30 01:44:47.614: INFO: Terminating ReplicationController wrapped-volume-race-580538f0-b273-4c6f-a88f-f1e7b6c0169e pods took: 101.156184ms
STEP: Creating RC which spawns configmap-volume pods
Oct 30 01:45:03.034: INFO: Pod name wrapped-volume-race-f8555aa8-3680-46e1-9385-28b38f42d1b8: Found 0 pods out of 5
Oct 30 01:45:08.043: INFO: Pod name wrapped-volume-race-f8555aa8-3680-46e1-9385-28b38f42d1b8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f8555aa8-3680-46e1-9385-28b38f42d1b8 in namespace emptydir-wrapper-4222, will wait for the garbage collector to delete the pods
Oct 30 01:45:22.125: INFO: Deleting ReplicationController wrapped-volume-race-f8555aa8-3680-46e1-9385-28b38f42d1b8 took: 5.406164ms
Oct 30 01:45:22.226: INFO: Terminating ReplicationController wrapped-volume-race-f8555aa8-3680-46e1-9385-28b38f42d1b8 pods took: 101.194226ms
STEP: Creating RC which spawns configmap-volume pods
Oct 30 01:45:32.943: INFO: Pod name wrapped-volume-race-bc6c063a-4044-4395-bd21-bb6c0d4e55ca: Found 0 pods out of 5
Oct 30 01:45:37.952: INFO: Pod name wrapped-volume-race-bc6c063a-4044-4395-bd21-bb6c0d4e55ca: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bc6c063a-4044-4395-bd21-bb6c0d4e55ca in namespace emptydir-wrapper-4222, will wait for the garbage collector to delete the pods
Oct 30 01:46:00.035: INFO: Deleting ReplicationController wrapped-volume-race-bc6c063a-4044-4395-bd21-bb6c0d4e55ca took: 5.599775ms
Oct 30 01:46:00.137: INFO: Terminating ReplicationController wrapped-volume-race-bc6c063a-4044-4395-bd21-bb6c0d4e55ca pods took: 101.195964ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:46:13.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4222" for this suite.
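The RC in this spec stamps out pods that each mount many ConfigMap volumes, probing for races between kubelet volume setup and teardown. For reference, a minimal client-go sketch of the same shape of workload; the ConfigMap names, image, counts, and namespace here are assumptions, not the e2e framework's actual helper:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mount several ConfigMap volumes per pod; the real test uses 50 ConfigMaps
	// and 5 replicas. ConfigMaps "racey-configmap-0..4" (hypothetical) must exist.
	var vols []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 5; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		vols = append(vols, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/cms/" + name})
	}

	replicas := int32(5)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "wrapped-volume-race"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "wrapped-volume-race"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:         "test-container",
						Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // image is an assumption
						Command:      []string{"sleep", "10000"},
						VolumeMounts: mounts,
					}},
					Volumes: vols,
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The test then deletes the RC and relies on the garbage collector to reap the pods, which is exactly what the "will wait for the garbage collector" STEP above is logging.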
• [SLOW TEST:107.128 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":1,"skipped":342,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:46:13.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:46:13.178: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:47:13.231: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Oct 30 01:47:13.259: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 30 01:47:13.279: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:47:35.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1751" for this suite.
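The spec above fills 2/3 of node resources with a low- and a medium-priority pod, then runs a high-priority pod with the same requirements so the scheduler must evict a lower-priority victim. A minimal client-go sketch of those mechanics; the PriorityClass names and values, image, request size, and namespace are all assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Two priority levels; names and values are hypothetical.
	for name, val := range map[string]int32{"low-priority": 10, "high-priority": 1000} {
		if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      val,
		}, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// A high-priority pod whose request overlaps what the low-priority pod holds;
	// when it cannot fit otherwise, the scheduler preempts the victim.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // image is an assumption
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Sized to force preemption; tune to the node's allocatable memory.
						corev1.ResourceMemory: resource.MustParse("60Gi"),
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```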
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:82.210 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":2,"skipped":482,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:47:35.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:47:35.399: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:48:35.454: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:48:35.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:48:35.492: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Oct 30 01:48:35.495: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:48:35.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4305" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:48:35.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4695" for this suite.
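The two "Forbidden" INFO lines above are the apiserver rejecting updates to an existing PriorityClass's value, which is immutable after creation; other fields remain mutable. A sketch against the same endpoints; the name "p1" matches the log, the label in the patch is an assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Assumes a PriorityClass named "p1" already exists, as in the log.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(ctx, "p1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Updating .value is rejected by validation with exactly the message logged:
	// PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
	pc.Value += 10
	if _, err := cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{}); err != nil {
		fmt.Println("value update rejected as expected:", err)
	}

	// Mutable metadata can still be patched (PATCH is one of the HTTP methods exercised).
	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`) // label is hypothetical
	if _, err := cs.SchedulingV1().PriorityClasses().Patch(ctx, "p1", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```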
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.216 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":3,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:48:35.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 30 01:48:35.617: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:35.617: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:35.617: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:35.619: INFO: Number of nodes with available pods: 0
Oct 30 01:48:35.619: INFO: Node node1 is running more than one daemon pod
Oct 30 01:48:36.625: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:36.625: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:36.625: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:36.628: INFO: Number of nodes with available pods: 0
Oct 30 01:48:36.628: INFO: Node node1 is running more than one daemon pod
Oct 30 01:48:37.627: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:37.627: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:37.627: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:37.630: INFO: Number of nodes with available pods: 0
Oct 30 01:48:37.630: INFO: Node node1 is running more than one daemon pod
Oct 30 01:48:38.628: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:38.628: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:38.628: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:38.630: INFO: Number of nodes with available pods: 1
Oct 30 01:48:38.630: INFO: Node node1 is running more than one daemon pod
Oct 30 01:48:39.624: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.624: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.624: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.627: INFO: Number of nodes with available pods: 2
Oct 30 01:48:39.627: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Oct 30 01:48:39.640: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.641: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.641: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:39.643: INFO: Number of nodes with available pods: 1
Oct 30 01:48:39.643: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:40.652: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:40.652: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:40.653: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:40.656: INFO: Number of nodes with available pods: 1
Oct 30 01:48:40.656: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:41.647: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:41.647: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:41.647: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:41.650: INFO: Number of nodes with available pods: 1
Oct 30 01:48:41.650: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:42.649: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:42.649: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:42.649: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:42.652: INFO: Number of nodes with available pods: 1
Oct 30 01:48:42.652: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:43.649: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:43.649: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:43.649: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:43.652: INFO: Number of nodes with available pods: 1
Oct 30 01:48:43.652: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:44.648: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:44.648: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:44.649: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:44.651: INFO: Number of nodes with available pods: 1
Oct 30 01:48:44.651: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:45.648: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:45.648: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:45.648: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:45.652: INFO: Number of nodes with available pods: 1
Oct 30 01:48:45.652: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:46.650: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:46.650: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:46.650: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:48:46.653: INFO: Number of nodes with available pods: 2
Oct 30 01:48:46.653: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1205, will wait for the garbage collector to delete the pods
Oct 30 01:48:46.712: INFO: Deleting DaemonSet.extensions daemon-set took: 4.453657ms
Oct 30 01:48:46.812: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.371988ms
Oct 30 01:48:52.915: INFO: Number of nodes with available pods: 0
Oct 30 01:48:52.915: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:48:52.921: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"105662"},"items":null}
Oct 30 01:48:52.924: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"105662"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:48:52.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1205" for this suite.
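The polling above waits for one daemon pod per schedulable worker; master1-3 are skipped only because the pod template carries no toleration for their node-role.kubernetes.io/master:NoSchedule taint. A minimal sketch of a DaemonSet of that shape; the namespace, labels, container name, and port are assumptions (the httpd image is the one the later RollingUpdate spec reports):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No master toleration, so pods land only on workers, matching
					// the "can't tolerate node masterN" lines in the log.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Deleting one daemon pod, as the "Stop a daemon pod" STEP does, makes the controller immediately create a replacement, which is what the counts climbing back to 2/2 show.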
• [SLOW TEST:17.367 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":4,"skipped":516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:48:52.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:48:52.988: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct 30 01:48:52.995: INFO: Number of nodes with available pods: 0
Oct 30 01:48:52.995: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Oct 30 01:48:53.020: INFO: Number of nodes with available pods: 0
Oct 30 01:48:53.020: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:54.024: INFO: Number of nodes with available pods: 0
Oct 30 01:48:54.024: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:55.024: INFO: Number of nodes with available pods: 0
Oct 30 01:48:55.024: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:56.024: INFO: Number of nodes with available pods: 0
Oct 30 01:48:56.024: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:57.027: INFO: Number of nodes with available pods: 1
Oct 30 01:48:57.027: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct 30 01:48:57.045: INFO: Number of nodes with available pods: 1
Oct 30 01:48:57.045: INFO: Number of running nodes: 0, number of available pods: 1
Oct 30 01:48:58.050: INFO: Number of nodes with available pods: 0
Oct 30 01:48:58.050: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct 30 01:48:58.057: INFO: Number of nodes with available pods: 0
Oct 30 01:48:58.057: INFO: Node node2 is running more than one daemon pod
Oct 30 01:48:59.063: INFO: Number of nodes with available pods: 0
Oct 30 01:48:59.063: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:00.063: INFO: Number of nodes with available pods: 0
Oct 30 01:49:00.063: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:01.060: INFO: Number of nodes with available pods: 0
Oct 30 01:49:01.060: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:02.063: INFO: Number of nodes with available pods: 0
Oct 30 01:49:02.063: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:03.063: INFO: Number of nodes with available pods: 0
Oct 30 01:49:03.063: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:04.063: INFO: Number of nodes with available pods: 0
Oct 30 01:49:04.063: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:05.061: INFO: Number of nodes with available pods: 0
Oct 30 01:49:05.061: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:06.060: INFO: Number of nodes with available pods: 1
Oct 30 01:49:06.060: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1656, will wait for the garbage collector to delete the pods
Oct 30 01:49:06.123: INFO: Deleting DaemonSet.extensions daemon-set took: 4.929254ms
Oct 30 01:49:06.224: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.634348ms
Oct 30 01:49:09.427: INFO: Number of nodes with available pods: 0
Oct 30 01:49:09.427: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:49:09.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"105789"},"items":null}
Oct 30 01:49:09.433: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"105789"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:49:09.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1656" for this suite.
• [SLOW TEST:16.508 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":5,"skipped":1034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:49:09.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:49:09.498: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Oct 30 01:49:09.510: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:09.510: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:09.510: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:09.516: INFO: Number of nodes with available pods: 0
Oct 30 01:49:09.517: INFO: Node node1 is running more than one daemon pod
Oct 30 01:49:10.522: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:10.522: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:10.522: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:10.525: INFO: Number of nodes with available pods: 0
Oct 30 01:49:10.525: INFO: Node node1 is running more than one daemon pod
Oct 30 01:49:11.522: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:11.522: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:11.522: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:11.526: INFO: Number of nodes with available pods: 0
Oct 30 01:49:11.526: INFO: Node node1 is running more than one daemon pod
Oct 30 01:49:12.523: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:12.523: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:12.523: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:12.526: INFO: Number of nodes with available pods: 0
Oct 30 01:49:12.526: INFO: Node node1 is running more than one daemon pod
Oct 30 01:49:13.523: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:13.523: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:13.523: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:13.525: INFO: Number of nodes with available pods: 2
Oct 30 01:49:13.525: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Oct 30 01:49:13.547: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:13.547: INFO: Wrong image for pod: daemon-set-pcggs. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:13.551: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:13.551: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:13.551: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:14.556: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:14.560: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:14.560: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:14.561: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:15.557: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:15.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:15.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:15.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:16.557: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:16.561: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:16.561: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:16.561: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:17.555: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:17.555: INFO: Pod daemon-set-l66bb is not available
Oct 30 01:49:17.559: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:17.559: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:17.559: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:18.557: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:18.557: INFO: Pod daemon-set-l66bb is not available
Oct 30 01:49:18.561: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:18.561: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:18.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:19.557: INFO: Wrong image for pod: daemon-set-5kd2f. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
Oct 30 01:49:19.557: INFO: Pod daemon-set-l66bb is not available
Oct 30 01:49:19.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:19.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:19.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:20.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:20.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:20.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:21.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:21.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:21.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:22.560: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:22.560: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:22.560: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:23.560: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:23.560: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:23.560: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:24.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:24.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:24.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:25.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:25.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:25.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:26.561: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:26.561: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:26.561: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:27.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:27.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:27.564: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:28.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:28.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:28.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:29.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:29.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:29.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:30.561: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:30.561: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:30.561: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:31.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:31.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:31.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:32.562: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:32.562: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:32.562: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.558: INFO: Pod daemon-set-cqgr9 is not available
Oct 30 01:49:33.563: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.563: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.563: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Oct 30 01:49:33.568: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.568: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.568: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:33.570: INFO: Number of nodes with available pods: 1
Oct 30 01:49:33.570: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:34.578: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:34.578: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:34.578: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:34.581: INFO: Number of nodes with available pods: 1
Oct 30 01:49:34.581: INFO: Node node2 is running more than one daemon pod
Oct 30 01:49:35.579: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:35.579: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:35.579: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 30 01:49:35.583: INFO: Number of nodes with available pods: 2
Oct 30 01:49:35.583: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9297, will wait for the garbage collector to delete the pods
Oct 30 01:49:35.655: INFO: Deleting DaemonSet.extensions daemon-set took: 4.82813ms
Oct 30 01:49:35.755: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.638276ms
Oct 30 01:49:42.858: INFO: Number of nodes with available pods: 0
Oct 30 01:49:42.858: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:49:42.860: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"105983"},"items":null}
Oct 30 01:49:42.862: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"105983"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:49:42.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9297" for this suite.
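The "Wrong image for pod" lines above track a rolling update converging from httpd to agnhost, one pod at a time (the default maxUnavailable of 1, which is why daemon-set-l66bb and daemon-set-cqgr9 each appear briefly as "not available"). A sketch of the two patches involved; the container name "app" and the namespace are assumptions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // the test runs in its own daemonsets-XXXX namespace

	// Ensure the update strategy is RollingUpdate (the DaemonSet default).
	strategy := []byte(`{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}`)
	if _, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set", types.StrategicMergePatchType, strategy, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Swap httpd for agnhost, the image change the log is converging on.
	image := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	ds, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set", types.StrategicMergePatchType, image, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("observedGeneration:", ds.Status.ObservedGeneration)
}
```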
• [SLOW TEST:33.423 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":6,"skipped":1071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:49:42.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:49:42.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9771" for this suite.
STEP: Destroying namespace "nspatchtest-44c9c4ed-d36b-4d87-b4ae-484054b7d594-427" for this suite.
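The patch/get/verify sequence in the Namespaces spec above is a single strategic-merge PATCH followed by a GET. A minimal sketch; the namespace name and the label key/value are assumptions (the real test uses a random nspatchtest-... name):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	name := "nspatchtest-demo" // hypothetical namespace; must already exist
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)

	// Patch the namespace, then read it back and confirm the label stuck.
	if _, err := cs.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels after patch:", got.Labels)
}
```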
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":7,"skipped":1309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:49:42.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:49:42.975: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:49:42.983: INFO: Waiting for terminating namespaces to be deleted... Oct 30 01:49:42.985: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:49:42.997: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:49:42.997: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:49:42.997: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:49:42.997: INFO: Container discover ready: false, restart count 0 Oct 30 01:49:42.997: INFO: Container init ready: false, restart count 0 Oct 30 01:49:42.997: INFO: Container install ready: false, restart count 0 Oct 30 01:49:42.997: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:49:42.997: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:49:42.997: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:49:42.997: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:49:42.997: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:49:42.997: INFO: 
node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:49:42.997: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:49:42.997: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:49:42.997: INFO: Container collectd ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:49:42.997: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:49:42.997: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:49:42.997: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:49:42.997: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container grafana ready: true, restart count 0 Oct 30 01:49:42.997: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:49:42.997: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 01:49:43.009: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:49:43.009: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:49:43.009: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:49:43.009: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:49:43.009: INFO: Container discover ready: false, restart count 0 Oct 30 01:49:43.009: INFO: Container init ready: false, restart count 0 Oct 30 01:49:43.009: INFO: Container install ready: false, restart count 0 Oct 30 01:49:43.009: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:49:43.009: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:49:43.009: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:49:43.009: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:49:43.009: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:49:43.009: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses 
recorded) Oct 30 01:49:43.009: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:49:43.009: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:49:43.009: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:49:43.009: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:49:43.009: INFO: Container collectd ready: true, restart count 0 Oct 30 01:49:43.009: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:49:43.009: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:49:43.009: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:49:43.009: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:49:43.009: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:49:43.009: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:49:43.009: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1acb1fc6-59ed-4b76-ab33-31beae15decf 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.10.190.208 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-1acb1fc6-59ed-4b76-ab33-31beae15decf off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1acb1fc6-59ed-4b76-ab33-31beae15decf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:54:51.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-101" for this suite. 
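The hostPort predicate above hinges on one rule: a hostIP of 0.0.0.0 (or the empty string) claims the port on every address of the node, so a second pod requesting the same hostPort and protocol with any concrete hostIP cannot be placed there. A sketch of the two pod shapes, assuming both are pinned to the chosen node via the random label the test applied; the image, label, and port values below are illustrative stand-ins, not read from the suite's source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithHostPort(name, hostIP string, sel map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: sel, // pin to the node carrying the random e2e label
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP, // "0.0.0.0" conflicts with any hostIP
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	sel := map[string]string{"kubernetes.io/e2e-example": "95"} // illustrative
	pod4 := podWithHostPort("pod4", "0.0.0.0", sel)       // would schedule once created
	pod5 := podWithHostPort("pod5", "10.10.190.208", sel) // would stay Pending: port already claimed
	fmt.Println(pod4.Name, pod5.Name)
}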
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.156 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":8,"skipped":1775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:54:51.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple 
DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 30 01:54:51.189: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:51.189: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:51.189: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:51.196: INFO: Number of nodes with available pods: 0 Oct 30 01:54:51.196: INFO: Node node1 is running more than one daemon pod Oct 30 01:54:52.203: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:52.203: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:52.203: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:52.206: INFO: Number of nodes with available pods: 0 Oct 30 01:54:52.206: INFO: Node node1 is running more than one daemon pod Oct 30 01:54:53.203: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:53.203: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:53.203: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:53.206: INFO: Number of nodes with available pods: 0 Oct 30 01:54:53.206: INFO: Node node1 is running more than one daemon pod Oct 30 01:54:54.204: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:54.204: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:54.204: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:54.208: INFO: Number of nodes with available pods: 1 Oct 30 01:54:54.208: INFO: Node node1 is running more than one daemon pod Oct 30 01:54:55.202: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.202: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.202: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.206: INFO: Number of nodes with available pods: 2 Oct 30 01:54:55.206: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's 
phase to 'Failed', check that the daemon pod is revived. Oct 30 01:54:55.223: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.223: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.223: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:55.225: INFO: Number of nodes with available pods: 1 Oct 30 01:54:55.225: INFO: Node node2 is running more than one daemon pod Oct 30 01:54:56.230: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:56.230: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:56.230: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:56.233: INFO: Number of nodes with available pods: 1 Oct 30 01:54:56.233: INFO: Node node2 is running more than one daemon pod Oct 30 01:54:57.232: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:57.232: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:57.233: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:57.235: INFO: Number of nodes with available pods: 1 Oct 30 01:54:57.235: INFO: Node node2 is running more than one daemon pod Oct 30 01:54:58.233: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:58.233: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:58.234: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:58.237: INFO: Number of nodes with available pods: 1 Oct 30 01:54:58.237: INFO: Node node2 is running more than one daemon pod Oct 30 01:54:59.232: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:59.232: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:59.232: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:54:59.235: INFO: Number of nodes with available pods: 2 Oct 30 01:54:59.235: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely 
deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4088, will wait for the garbage collector to delete the pods Oct 30 01:54:59.297: INFO: Deleting DaemonSet.extensions daemon-set took: 3.311892ms Oct 30 01:54:59.398: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.020154ms Oct 30 01:55:12.900: INFO: Number of nodes with available pods: 0 Oct 30 01:55:12.901: INFO: Number of running nodes: 0, number of available pods: 0 Oct 30 01:55:12.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"107002"},"items":null} Oct 30 01:55:12.905: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"107002"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:55:12.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4088" for this suite. • [SLOW TEST:21.794 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":9,"skipped":3483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:55:12.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:55:12.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:55:12.963: INFO: Waiting for terminating namespaces to be deleted... 
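In the retry test above, "Set a daemon pod's phase to 'Failed'" is done through the pod status subresource; the DaemonSet controller then deletes the failed pod and creates a replacement, which is what the revival wait loop checks. A minimal sketch of that injection, assuming a daemon pod can be found by label; the namespace and label selector are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Find one pod belonging to the DaemonSet; the selector is illustrative.
	pods, err := cs.CoreV1().Pods("daemonsets-example").List(ctx,
		metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(pods.Items) == 0 {
		panic("no daemon pods found")
	}

	// Flip the phase to Failed via the status subresource; the DaemonSet
	// controller reacts by removing the pod and spawning a new one.
	p := pods.Items[0]
	p.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(p.Namespace).UpdateStatus(ctx, &p, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("marked Failed:", p.Name)
}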
Oct 30 01:55:12.966: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:55:12.974: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:55:12.974: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:12.974: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:55:12.974: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:12.974: INFO: Container init ready: false, restart count 0 Oct 30 01:55:12.974: INFO: Container install ready: false, restart count 0 Oct 30 01:55:12.974: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:55:12.974: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:12.974: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:12.974: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:55:12.974: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:12.974: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:12.974: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:12.974: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:12.974: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:12.974: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:12.974: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:12.974: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:55:12.974: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container grafana ready: true, restart count 0 Oct 30 01:55:12.974: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:55:12.974: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 
01:55:12.984: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:55:12.984: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:12.984: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:12.984: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:55:12.984: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:12.984: INFO: Container init ready: false, restart count 0 Oct 30 01:55:12.984: INFO: Container install ready: false, restart count 0 Oct 30 01:55:12.984: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:55:12.984: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:55:12.984: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:12.984: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:12.984: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:55:12.984: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:12.984: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:12.984: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:12.984: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:12.984: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:12.984: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:12.984: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:12.984: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:12.984: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:12.984: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:12.984: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:55:12.984: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: 
Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b2ac2a73d20fe3], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:55:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8960" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":10,"skipped":3514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:55:14.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:55:14.057: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:55:14.068: INFO: Waiting for terminating namespaces to be deleted... 
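The FailedScheduling event above is exactly what a nonempty nodeSelector that matches no node produces: the API accepts the pod, but it never leaves Pending. A sketch of such a pod, assuming an arbitrary namespace; the selector key/value mirror the "nonempty NodeSelector" idea rather than the test's actual values. (The "matching" variant of this test, further below, first stamps a random label onto a node and then sets the same key/value in nodeSelector, so that pod does schedule.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler can never place the pod.
			NodeSelector: map[string]string{"e2e-example": "nonexistent"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	created, err := cs.CoreV1().Pods("sched-pred-example").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Creation succeeds; the scheduler then emits a FailedScheduling event
	// like the one logged above, and the pod stays Pending.
	fmt.Println("created:", created.Name)
}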
Oct 30 01:55:14.072: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:55:14.091: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:55:14.091: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:14.091: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:14.091: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:55:14.091: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:14.091: INFO: Container init ready: false, restart count 0 Oct 30 01:55:14.091: INFO: Container install ready: false, restart count 0 Oct 30 01:55:14.091: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.091: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:55:14.091: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.091: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:14.091: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.091: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:14.091: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.091: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:55:14.091: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.091: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:14.091: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.092: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:14.092: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.092: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:14.092: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:14.092: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:14.092: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:14.092: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:14.092: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:55:14.092: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container grafana ready: true, restart count 0 Oct 30 01:55:14.092: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:55:14.092: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 
01:55:14.101: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:55:14.101: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:14.101: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:14.101: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:55:14.101: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:14.101: INFO: Container init ready: false, restart count 0 Oct 30 01:55:14.101: INFO: Container install ready: false, restart count 0 Oct 30 01:55:14.101: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:55:14.101: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:55:14.101: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:14.101: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:14.101: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:55:14.101: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:14.101: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:14.101: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:14.101: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:14.101: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:14.101: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:14.101: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:14.101: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:14.101: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:14.101: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:14.101: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 01:55:14.101: INFO: Container tas-extender ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: 
verifying the node has the label node node1 STEP: verifying the node has the label node node2 Oct 30 01:55:20.247: INFO: Pod cmk-89lqq requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod cmk-8bpbf requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod cmk-webhook-6c9d5f8578-ffk66 requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod kube-flannel-f6s5v requesting resource cpu=150m on Node node2 Oct 30 01:55:20.247: INFO: Pod kube-flannel-phg88 requesting resource cpu=150m on Node node1 Oct 30 01:55:20.247: INFO: Pod kube-multus-ds-amd64-68wrz requesting resource cpu=100m on Node node1 Oct 30 01:55:20.247: INFO: Pod kube-multus-ds-amd64-7tvbl requesting resource cpu=100m on Node node2 Oct 30 01:55:20.247: INFO: Pod kube-proxy-76285 requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod kube-proxy-z5hqt requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod kubernetes-dashboard-785dcbb76d-pbjjt requesting resource cpu=50m on Node node2 Oct 30 01:55:20.247: INFO: Pod kubernetes-metrics-scraper-5558854cb-5rmjw requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod nginx-proxy-node1 requesting resource cpu=25m on Node node1 Oct 30 01:55:20.247: INFO: Pod nginx-proxy-node2 requesting resource cpu=25m on Node node2 Oct 30 01:55:20.247: INFO: Pod node-feature-discovery-worker-h6lcp requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod node-feature-discovery-worker-w5vdb requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-t789r requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod collectd-d45rv requesting resource cpu=0m on Node node1 Oct 30 01:55:20.247: INFO: Pod collectd-flvhl requesting resource cpu=0m on Node node2 Oct 30 01:55:20.247: INFO: Pod node-exporter-256wm requesting resource cpu=112m on Node node1 Oct 30 01:55:20.247: INFO: Pod node-exporter-r77s4 requesting resource cpu=112m on Node node2 Oct 30 01:55:20.247: INFO: Pod prometheus-k8s-0 requesting resource cpu=200m on Node node1 Oct 30 01:55:20.247: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-989mh requesting resource cpu=0m on Node node2 STEP: Starting Pods to consume most of the cluster CPU. Oct 30 01:55:20.247: INFO: Creating a pod which consumes cpu=53489m on Node node1 Oct 30 01:55:20.258: INFO: Creating a pod which consumes cpu=53594m on Node node2 STEP: Creating another pod that requires unavailable amount of CPU. 
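The filler-pod sizes come straight from the request accounting just logged: on node1 the non-zero CPU requests are 150m + 100m + 25m + 112m + 200m = 587m, and the test then asks for 53489m more, so the two together approximate node1's allocatable CPU at about 54076m (the allocatable value itself is inferred here, not printed by the run). With the node saturated, any further CPU-requesting pod draws the "Insufficient cpu" rejection shown next. A small sketch of that bookkeeping with resource.Quantity, using the figures read off the log above:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Non-zero CPU requests the scheduler counts on node1, per the log above.
	requested := []string{"150m", "100m", "25m", "112m", "200m"}
	total := resource.MustParse("0")
	for _, r := range requested {
		q := resource.MustParse(r)
		total.Add(q)
	}
	fmt.Println("already requested on node1:", total.String()) // 587m

	// The test sized its node1 filler pod at 53489m; the sum approximates
	// the node's allocatable CPU, so one more CPU-requesting pod cannot fit.
	filler := resource.MustParse("53489m")
	total.Add(filler)
	fmt.Println("approx node1 allocatable:", total.String()) // ~54076m
}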
STEP: Considering event: Type = [Normal], Name = [filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb.16b2ac2c23804696], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7873/filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb.16b2ac2c853920e3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb.16b2ac2c9903ff51], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 332.055717ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb.16b2ac2c9f2170db], Reason = [Created], Message = [Created container filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb] STEP: Considering event: Type = [Normal], Name = [filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb.16b2ac2ca6863674], Reason = [Started], Message = [Started container filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb] STEP: Considering event: Type = [Normal], Name = [filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c.16b2ac2c23eeec06], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7873/filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c.16b2ac2c7d452046], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c.16b2ac2c8f3ad20e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.306351ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c.16b2ac2c952d11ea], Reason = [Created], Message = [Created container filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c] STEP: Considering event: Type = [Normal], Name = [filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c.16b2ac2c9cc48c74], Reason = [Started], Message = [Started container filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b2ac2d13d07439], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: removing the label node off the node node1 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node node2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:55:25.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7873" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.297 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":11,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:55:25.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 01:55:25.355: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 01:55:25.363: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 01:55:25.365: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 01:55:25.372: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 01:55:25.372: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:25.372: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:25.372: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 01:55:25.372: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:25.372: INFO: Container init ready: false, restart count 0 Oct 30 01:55:25.372: INFO: Container install ready: false, restart count 0 Oct 30 01:55:25.372: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.372: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 01:55:25.372: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.372: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:25.372: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.372: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:25.372: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.372: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 01:55:25.372: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.372: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:25.372: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.373: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:25.373: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.373: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:25.373: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:25.373: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:25.373: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:25.373: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:25.373: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 01:55:25.373: INFO: Container config-reloader ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container grafana ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Container prometheus ready: true, restart count 1 Oct 30 01:55:25.373: INFO: filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb from sched-pred-7873 
started at 2021-10-30 01:55:20 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.373: INFO: Container filler-pod-302b3772-7ce6-4c1a-9caa-b3adddddfdfb ready: true, restart count 0 Oct 30 01:55:25.373: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 01:55:25.382: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 01:55:25.382: INFO: Container nodereport ready: true, restart count 0 Oct 30 01:55:25.382: INFO: Container reconcile ready: true, restart count 0 Oct 30 01:55:25.382: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 01:55:25.382: INFO: Container discover ready: false, restart count 0 Oct 30 01:55:25.382: INFO: Container init ready: false, restart count 0 Oct 30 01:55:25.382: INFO: Container install ready: false, restart count 0 Oct 30 01:55:25.382: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 01:55:25.382: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 01:55:25.382: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kube-multus ready: true, restart count 1 Oct 30 01:55:25.382: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 01:55:25.382: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 01:55:25.382: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 01:55:25.382: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 01:55:25.382: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 01:55:25.382: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 01:55:25.382: INFO: Container collectd ready: true, restart count 0 Oct 30 01:55:25.382: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 01:55:25.382: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 01:55:25.382: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 01:55:25.382: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 01:55:25.382: INFO: Container node-exporter ready: true, restart count 0 Oct 30 01:55:25.382: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses 
recorded) Oct 30 01:55:25.382: INFO: Container tas-extender ready: true, restart count 0 Oct 30 01:55:25.382: INFO: filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c from sched-pred-7873 started at 2021-10-30 01:55:20 +0000 UTC (1 container statuses recorded) Oct 30 01:55:25.382: INFO: Container filler-pod-6dd508fb-17cd-46fe-8c9a-d22cd453527c ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b4d3f701-f505-4093-95b2-7c49d54dbe0a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b4d3f701-f505-4093-95b2-7c49d54dbe0a off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b4d3f701-f505-4093-95b2-7c49d54dbe0a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 01:55:33.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-324" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.129 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":12,"skipped":3715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 01:55:33.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Oct 30 01:55:33.506: INFO: Create a RollingUpdate DaemonSet Oct 30 01:55:33.511: INFO: Check that daemon pods launch on every node of the cluster Oct 30 01:55:33.517: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Oct 30 01:55:33.517: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:33.517: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:33.523: INFO: Number of nodes with available pods: 0 Oct 30 01:55:33.523: INFO: Node node1 is running more than one daemon pod Oct 30 01:55:34.529: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:34.529: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:34.529: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:34.532: INFO: Number of nodes with available pods: 0 Oct 30 01:55:34.532: INFO: Node node1 is running more than one daemon pod Oct 30 01:55:35.531: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:35.531: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:35.531: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:35.534: INFO: Number of nodes with available pods: 0 Oct 30 01:55:35.534: INFO: Node node1 is running more than one daemon pod Oct 30 01:55:36.529: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:36.529: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:36.529: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:36.533: INFO: Number of nodes with available pods: 1 Oct 30 01:55:36.533: INFO: Node node1 is running more than one daemon pod Oct 30 01:55:37.530: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:37.530: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:37.530: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 30 01:55:37.534: INFO: Number of nodes with available pods: 2 Oct 30 01:55:37.534: INFO: Number of running nodes: 2, number of available pods: 2 Oct 30 01:55:37.534: INFO: Update the DaemonSet to trigger a rollout Oct 30 01:55:37.539: INFO: Updating DaemonSet daemon-set Oct 30 01:55:53.557: INFO: Roll back the DaemonSet before rollout is complete Oct 30 01:55:53.564: INFO: Updating DaemonSet daemon-set Oct 30 
Oct 30 01:55:53.564: INFO: Make sure DaemonSet rollback is complete
Oct 30 01:55:53.567: INFO: Wrong image for pod: daemon-set-5fzg7. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
Oct 30 01:55:53.567: INFO: Pod daemon-set-5fzg7 is not available
Oct 30 01:55:53.572: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:53.572: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:53.572: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:54.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:54.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:54.583: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:55.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:55.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:55.584: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:56.582: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:56.582: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:56.582: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:57.582: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:57.582: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:57.582: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:58.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:58.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:58.583: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:59.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:59.583: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:55:59.583: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:00.585: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:00.585: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:00.585: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:01.581: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:01.581: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:01.581: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:02.583: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:02.584: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:02.584: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:03.579: INFO: Pod daemon-set-tz4qp is not available
Oct 30 01:56:03.584: INFO: DaemonSet pods can't tolerate node master1 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:03.584: INFO: DaemonSet pods can't tolerate node master2 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Oct 30 01:56:03.584: INFO: DaemonSet pods can't tolerate node master3 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
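Every "can't tolerate" line above is the availability check skipping the three tainted masters, so the DaemonSet is only expected on node1 and node2. For reference, a sketch of the toleration a pod template would need to schedule onto those nodes anyway; this is illustrative and not something the conformance test adds:

```go
// Sketch: the toleration matching the taint reported in the log,
// {Key:node-role.kubernetes.io/master Effect:NoSchedule}.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists, // tolerate the taint regardless of its value
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// Placed in spec.template.spec.tolerations of a DaemonSet, this would
	// let its pods land on the master nodes the log is skipping.
	fmt.Printf("%+v\n", tol)
}
```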
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1422, will wait for the garbage collector to delete the pods
Oct 30 01:56:03.649: INFO: Deleting DaemonSet.extensions daemon-set took: 4.543207ms
Oct 30 01:56:03.749: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.324422ms
Oct 30 01:56:13.252: INFO: Number of nodes with available pods: 0
Oct 30 01:56:13.252: INFO: Number of running nodes: 0, number of available pods: 0
Oct 30 01:56:13.254: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"107397"},"items":null}
Oct 30 01:56:13.257: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"107397"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:56:13.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1422" for this suite.

• [SLOW TEST:39.813 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":13,"skipped":3831,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
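The sequence the log records is a RollingUpdate rollout wedged on an unpullable image, then an undo before it completes. It can be approximated with client-go as below; a sketch under assumptions: the `default` namespace, a DaemonSet named `daemon-set`, error handling reduced to panics, and rollback done by restoring the saved pod template (`kubectl rollout undo daemonset/daemon-set` is the revision-based CLI equivalent):

```go
// Sketch: wedge a DaemonSet rollout, then roll it back.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.TODO()
	dsc := cs.AppsV1().DaemonSets("default")

	// Start a rollout that can never finish: the new image does not exist.
	ds, err := dsc.Get(ctx, "daemon-set", metav1.GetOptions{})
	must(err)
	good := ds.Spec.Template.Spec.Containers[0].Image // e.g. the httpd:2.4.38-1 seen in the log
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	_, err = dsc.Update(ctx, ds, metav1.UpdateOptions{})
	must(err)

	// Roll back mid-rollout by restoring the previous template. Healthy pods
	// from the old revision are left alone -- the "without unnecessary
	// restarts" property the test asserts.
	ds, err = dsc.Get(ctx, "daemon-set", metav1.GetOptions{})
	must(err)
	ds.Spec.Template.Spec.Containers[0].Image = good
	_, err = dsc.Update(ctx, ds, metav1.UpdateOptions{})
	must(err)
}
```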
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:56:13.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:56:13.320: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:57:13.376: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:57:13.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Oct 30 01:57:17.431: INFO: found a healthy node: node2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Oct 30 01:57:29.490: INFO: pods created so far: [1 1 1]
Oct 30 01:57:29.490: INFO: length of pods created so far: 3
Oct 30 01:57:47.509: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:57:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4342" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:57:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7815" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:101.297 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":14,"skipped":4840,"failed":0}
SSSSSS
------------------------------
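The "[2 2 1]" counts above come from ReplicaSets at different priorities racing for room on one node, with the higher-priority pods preempting the lower. A hedged sketch of the two building blocks such a test needs, a PriorityClass and a ReplicaSet whose pods reference it; the names, the priority value 1000, the 500m CPU request, and the pause image are invented for illustration:

```go
// Sketch: a PriorityClass plus a ReplicaSet whose pods carry it.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Pods using this class may preempt lower-priority pods on a full node.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-high"},
		Value:      1000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A ReplicaSet whose pods request real CPU and carry the priority class.
	labels := map[string]string{"app": "demo-high"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-high-rs"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					PriorityClassName: "demo-high",
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.4.1",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU: resource.MustParse("500m"),
							},
						},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(ctx, rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```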
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:57:54.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:58:00.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3303" for this suite.
STEP: Destroying namespace "nsdeletetest-4274" for this suite.
Oct 30 01:58:00.671: INFO: Namespace nsdeletetest-4274 was already deleted
STEP: Destroying namespace "nsdeletetest-4450" for this suite.

• [SLOW TEST:6.090 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":15,"skipped":4846,"failed":0}
SSSSSSSSSS
------------------------------
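The check above creates a Service inside a throwaway namespace, deletes the namespace, and asserts the Service went with it. A sketch of that sequence against the same kubeconfig; the namespace name `nsdelete-demo` and service name are invented, and the poll interval/timeout are arbitrary:

```go
// Sketch: a Service must disappear when its namespace is deleted.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdelete-demo"}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := cs.CoreV1().Services("nsdelete-demo").Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdelete-demo", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Namespace deletion is asynchronous: poll until the namespace is gone.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdelete-demo", metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}

	// The Service must have been garbage-collected along with the namespace.
	_, err = cs.CoreV1().Services("nsdelete-demo").Get(ctx, "test-service", metav1.GetOptions{})
	fmt.Println("service gone:", apierrors.IsNotFound(err))
}
```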
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:58:00.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Oct 30 01:58:00.712: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 01:59:00.767: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
Oct 30 01:59:00.794: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 30 01:59:00.814: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 01:59:36.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-455" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:96.223 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":16,"skipped":4856,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
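The "critical pod" in this test is an ordinary pod referencing one of Kubernetes' built-in high-priority classes. A hedged sketch of such a pod; the pod name and resource figures are invented, and `kube-system` is used because the built-in classes are commonly restricted to that namespace:

```go
// Sketch: a pod using the built-in system-cluster-critical PriorityClass,
// which lets the scheduler preempt lower-priority pods to make room for it.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	critical := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod"},
		Spec: corev1.PodSpec{
			// Built-in class with a very high priority value.
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				// Requests sized to collide with the low-priority pod's share,
				// forcing a preemption decision (figures illustrative).
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1"),
						corev1.ResourceMemory: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("kube-system").Create(context.TODO(), critical, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```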
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 01:59:36.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 02:00:07.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5768" for this suite.
STEP: Destroying namespace "nsdeletetest-1580" for this suite.
Oct 30 02:00:08.002: INFO: Namespace nsdeletetest-1580 was already deleted
STEP: Destroying namespace "nsdeletetest-9700" for this suite.

• [SLOW TEST:31.102 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":17,"skipped":4925,"failed":0}
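This pods variant ends by recreating a namespace of the same name and asserting it comes back empty, which is the part the services sketch above did not cover. A short sketch of that final check, assuming the earlier namespace has fully terminated; the `nsdelete-demo` name is the same invented one:

```go
// Sketch: a recreated namespace must contain no pods from its old incarnation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Recreate only succeeds once the old namespace has finished terminating.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdelete-demo"}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("nsdelete-demo").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods in recreated namespace: %d (want 0)\n", len(pods.Items))
}
```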
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Oct 30 02:00:08.017: INFO: Running AfterSuite actions on all nodes
Oct 30 02:00:08.017: INFO: Running AfterSuite actions on node 1
Oct 30 02:00:08.017: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml

{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5753,"failed":0}

Ran 17 of 5770 Specs in 942.139 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5753 Skipped
PASS

Ginkgo ran 1 suite in 15m43.508118374s
Test Suite Passed