I0525 10:20:06.610978 17 e2e.go:129] Starting e2e run "f665906d-4cd4-4f8a-bb46-f74bae19a615" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621938005 - Will randomize all specs
Will run 17 of 5771 specs

May 25 10:20:06.684: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:20:06.689: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 25 10:20:06.717: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 25 10:20:06.770: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 25 10:20:06.770: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 25 10:20:06.770: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 25 10:20:06.784: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 25 10:20:06.784: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 25 10:20:06.784: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 25 10:20:06.784: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 25 10:20:06.784: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 25 10:20:06.784: INFO: e2e test version: v1.21.1
May 25 10:20:06.785: INFO: kube-apiserver version: v1.21.1
May 25 10:20:06.785: INFO: >>> kubeConfig: /root/.kube/config
May 25 10:20:06.792: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:20:06.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
W0525 10:20:07.483727 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 25 10:20:07.483: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 25 10:20:07.783: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 25 10:20:08.481: INFO: Waiting up to 1m0s for all nodes to be ready
May 25 10:21:08.529: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:21:08.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May 25 10:21:10.604: INFO: found a healthy node: v1.21-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:21:40.983: INFO: pods created so far: [1 1 1]
May 25 10:21:40.983: INFO: length of pods created so far: 3
May 25 10:22:00.999: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:22:08.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-3978" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:22:08.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3287" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:121.289 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":1,"skipped":474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:22:08.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:22:08.152: INFO: Create a RollingUpdate DaemonSet
May 25 10:22:08.157: INFO: Check that daemon pods launch on every node of the cluster
May 25 10:22:08.161: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:08.164: INFO: Number of nodes with available pods: 0
May 25 10:22:08.164: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:22:09.171: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:09.175: INFO: Number of nodes with available pods: 0
May 25 10:22:09.175: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:22:10.170: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:10.174: INFO: Number of nodes with available pods: 2
May 25 10:22:10.174: INFO: Number of running nodes: 2, number of available pods: 2
May 25 10:22:10.174: INFO: Update the DaemonSet to trigger a rollout
May 25 10:22:10.185: INFO: Updating DaemonSet daemon-set
May 25 10:22:14.202: INFO: Roll back the DaemonSet before rollout is complete
May 25 10:22:14.210: INFO: Updating DaemonSet daemon-set
May 25 10:22:14.210: INFO: Make sure DaemonSet rollback is complete
May 25 10:22:14.213: INFO: Wrong image for pod: daemon-set-fq6zb. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
May 25 10:22:14.213: INFO: Pod daemon-set-fq6zb is not available
May 25 10:22:14.217: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:15.226: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:16.226: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:17.227: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:18.229: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:19.228: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:20.229: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:21.232: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:22.227: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:23.227: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:24.228: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:22:25.222: INFO: Pod daemon-set-7ljrh is not available
May 25 10:22:25.227: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3248, will wait for the garbage collector to delete the pods
May 25 10:22:25.294: INFO: Deleting DaemonSet.extensions daemon-set took: 5.39078ms
May 25 10:22:25.395: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.855994ms
May 25 10:22:35.099: INFO: Number of nodes with available pods: 0
May 25 10:22:35.099: INFO: Number of running nodes: 0, number of available pods: 0
May 25 10:22:35.105: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"509115"},"items":null}
May 25 10:22:35.108: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509115"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:22:35.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3248" for this suite.

• [SLOW TEST:27.033 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":2,"skipped":1109,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:22:35.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 25 10:22:35.669: INFO: Pod name wrapped-volume-race-e8f5f60c-538a-4a98-8dce-d008dbfc4b0a: Found 0 pods out of 5
May 25 10:22:40.680: INFO: Pod name wrapped-volume-race-e8f5f60c-538a-4a98-8dce-d008dbfc4b0a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e8f5f60c-538a-4a98-8dce-d008dbfc4b0a in namespace emptydir-wrapper-7220, will wait for the garbage collector to delete the pods
May 25 10:22:50.774: INFO: Deleting ReplicationController wrapped-volume-race-e8f5f60c-538a-4a98-8dce-d008dbfc4b0a took: 6.213193ms
May 25 10:22:50.874: INFO: Terminating ReplicationController wrapped-volume-race-e8f5f60c-538a-4a98-8dce-d008dbfc4b0a pods took: 100.25709ms
STEP: Creating RC which spawns configmap-volume pods
May 25 10:23:05.199: INFO: Pod name wrapped-volume-race-31671e4c-a13e-4486-b9e8-f57dccdb601f: Found 0 pods out of 5
May 25 10:23:10.388: INFO: Pod name wrapped-volume-race-31671e4c-a13e-4486-b9e8-f57dccdb601f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-31671e4c-a13e-4486-b9e8-f57dccdb601f in namespace emptydir-wrapper-7220, will wait for the garbage collector to delete the pods
May 25 10:23:20.476: INFO: Deleting ReplicationController wrapped-volume-race-31671e4c-a13e-4486-b9e8-f57dccdb601f took: 6.377312ms
May 25 10:23:20.576: INFO: Terminating ReplicationController wrapped-volume-race-31671e4c-a13e-4486-b9e8-f57dccdb601f pods took: 100.314695ms
STEP: Creating RC which spawns configmap-volume pods
May 25 10:23:25.596: INFO: Pod name wrapped-volume-race-d54c0ba8-98b8-4ced-9f81-de21e1f74018: Found 0 pods out of 5
May 25 10:23:30.605: INFO: Pod name wrapped-volume-race-d54c0ba8-98b8-4ced-9f81-de21e1f74018: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d54c0ba8-98b8-4ced-9f81-de21e1f74018 in namespace emptydir-wrapper-7220, will wait for the garbage collector to delete the pods
May 25 10:23:41.051: INFO: Deleting ReplicationController wrapped-volume-race-d54c0ba8-98b8-4ced-9f81-de21e1f74018 took: 5.713173ms
May 25 10:23:41.151: INFO: Terminating ReplicationController wrapped-volume-race-d54c0ba8-98b8-4ced-9f81-de21e1f74018 pods took: 100.589604ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:23:55.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7220" for this suite.

• [SLOW TEST:80.254 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":3,"skipped":1442,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:23:55.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 25 10:23:55.457: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:55.459: INFO: Number of nodes with available pods: 0
May 25 10:23:55.459: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:23:56.465: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:56.469: INFO: Number of nodes with available pods: 0
May 25 10:23:56.469: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:23:57.465: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:57.469: INFO: Number of nodes with available pods: 1
May 25 10:23:57.469: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:23:58.465: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:58.469: INFO: Number of nodes with available pods: 2
May 25 10:23:58.469: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 25 10:23:58.486: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:58.490: INFO: Number of nodes with available pods: 1
May 25 10:23:58.490: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:23:59.496: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:23:59.500: INFO: Number of nodes with available pods: 1
May 25 10:23:59.500: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:00.495: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:00.499: INFO: Number of nodes with available pods: 1
May 25 10:24:00.499: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:01.495: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:01.499: INFO: Number of nodes with available pods: 1
May 25 10:24:01.499: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:02.496: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:02.501: INFO: Number of nodes with available pods: 1
May 25 10:24:02.501: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:03.496: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:03.500: INFO: Number of nodes with available pods: 1
May 25 10:24:03.500: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:04.495: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:04.499: INFO: Number of nodes with available pods: 1
May 25 10:24:04.499: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:05.495: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:05.499: INFO: Number of nodes with available pods: 1
May 25 10:24:05.499: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:06.496: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:06.500: INFO: Number of nodes with available pods: 1
May 25 10:24:06.500: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:24:07.496: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:24:07.500: INFO: Number of nodes with available pods: 2
May 25 10:24:07.500: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5467, will wait for the garbage collector to delete the pods
May 25 10:24:07.562: INFO: Deleting DaemonSet.extensions daemon-set took: 5.35552ms
May 25 10:24:07.663: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.078531ms
May 25 10:24:15.066: INFO: Number of nodes with available pods: 0
May 25 10:24:15.066: INFO: Number of running nodes: 0, number of available pods: 0
May 25 10:24:15.069: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"510339"},"items":null}
May 25 10:24:15.071: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"510339"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:24:15.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5467" for this suite.
• [SLOW TEST:19.783 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":4,"skipped":1822,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:24:15.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 25 10:24:15.227: INFO: Waiting up to 1m0s for all nodes to be ready
May 25 10:25:15.278: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
May 25 10:25:15.484: INFO: Created pod: pod0-sched-preemption-low-priority
May 25 10:25:15.881: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:25:38.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2788" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:82.991 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":5,"skipped":1896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:25:38.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 25 10:25:38.226: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 10:25:38.235: INFO: Waiting for terminating namespaces to be deleted...
May 25 10:25:38.238: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test
May 25 10:25:38.246: INFO: coredns-558bd4d5db-46k4j from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container coredns ready: true, restart count 0
May 25 10:25:38.246: INFO: coredns-558bd4d5db-kff7s from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container coredns ready: true, restart count 0
May 25 10:25:38.246: INFO: create-loop-devs-zpb97 from kube-system started at 2021-05-25 02:04:35 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container loopdev ready: true, restart count 0
May 25 10:25:38.246: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:25:38.246: INFO: kube-multus-ds-fnq4h from kube-system started at 2021-05-25 02:04:15 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container kube-multus ready: true, restart count 0
May 25 10:25:38.246: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:25:38.246: INFO: tune-sysctls-4ntcs from kube-system started at 2021-05-25 02:03:55 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container setsysctls ready: true, restart count 0
May 25 10:25:38.246: INFO: speaker-nljg8 from metallb-system started at 2021-05-25 02:03:55 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container speaker ready: true, restart count 0
May 25 10:25:38.246: INFO: preemptor-pod from sched-preemption-2788 started at 2021-05-25 10:25:35 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.246: INFO: Container preemptor-pod ready: true, restart count 0
May 25 10:25:38.246: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test
May 25 10:25:38.255: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container loopdev ready: true, restart count 0
May 25 10:25:38.255: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:25:38.255: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container kube-multus ready: true, restart count 1
May 25 10:25:38.255: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:25:38.255: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container setsysctls ready: true, restart count 0
May 25 10:25:38.255: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.255: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 25 10:25:38.256: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 25 10:25:38.256: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container controller ready: true, restart count 0
May 25 10:25:38.256: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container speaker ready: true, restart count 0
May 25 10:25:38.256: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container contour ready: true, restart count 0
May 25 10:25:38.256: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container contour ready: true, restart count 0
May 25 10:25:38.256: INFO: pod1-sched-preemption-medium-priority from sched-preemption-2788 started at 2021-05-25 10:25:18 +0000 UTC (1 container statuses recorded)
May 25 10:25:38.256: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7ebb551a-8e3a-48ce-90c4-01fa0c722ae0 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.4 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-7ebb551a-8e3a-48ce-90c4-01fa0c722ae0 off the node v1.21-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7ebb551a-8e3a-48ce-90c4-01fa0c722ae0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:30:42.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1961" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:304.171 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":6,"skipped":3058,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:30:42.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:30:42.420: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 25 10:30:42.428: INFO: Number of nodes with available pods: 0
May 25 10:30:42.428: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 25 10:30:42.446: INFO: Number of nodes with available pods: 0
May 25 10:30:42.446: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:43.450: INFO: Number of nodes with available pods: 0
May 25 10:30:43.450: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:44.450: INFO: Number of nodes with available pods: 1
May 25 10:30:44.450: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 25 10:30:44.469: INFO: Number of nodes with available pods: 1
May 25 10:30:44.469: INFO: Number of running nodes: 0, number of available pods: 1
May 25 10:30:45.473: INFO: Number of nodes with available pods: 0
May 25 10:30:45.473: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 25 10:30:45.483: INFO: Number of nodes with available pods: 0
May 25 10:30:45.483: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:46.488: INFO: Number of nodes with available pods: 0
May 25 10:30:46.488: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:47.488: INFO: Number of nodes with available pods: 0
May 25 10:30:47.488: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:48.489: INFO: Number of nodes with available pods: 0
May 25 10:30:48.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:49.488: INFO: Number of nodes with available pods: 0
May 25 10:30:49.488: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:50.489: INFO: Number of nodes with available pods: 0
May 25 10:30:50.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:51.488: INFO: Number of nodes with available pods: 0
May 25 10:30:51.488: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:52.489: INFO: Number of nodes with available pods: 0
May 25 10:30:52.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:53.489: INFO: Number of nodes with available pods: 0
May 25 10:30:53.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:54.489: INFO: Number of nodes with available pods: 0
May 25 10:30:54.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:55.488: INFO: Number of nodes with available pods: 0
May 25 10:30:55.488: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:56.489: INFO: Number of nodes with available pods: 0
May 25 10:30:56.489: INFO: Node v1.21-worker2 is running more than one daemon pod
May 25 10:30:57.487: INFO: Number of nodes with available pods: 1
May 25 10:30:57.488: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5296, will wait for the garbage collector to delete the pods
May 25 10:30:57.553: INFO: Deleting DaemonSet.extensions daemon-set took: 5.129869ms
May 25 10:30:57.654: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.167445ms
May 25 10:31:05.558: INFO: Number of nodes with available pods: 0
May 25 10:31:05.558: INFO: Number of running nodes: 0, number of available pods: 0
May 25 10:31:05.561: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"511564"},"items":null}
May 25 10:31:05.564: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"511564"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:31:05.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5296" for this suite.

• [SLOW TEST:23.227 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":7,"skipped":3628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:31:05.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 25 10:31:05.661: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:05.664: INFO: Number of nodes with available pods: 0
May 25 10:31:05.664: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:31:06.671: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:06.675: INFO: Number of nodes with available pods: 0
May 25 10:31:06.675: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:31:07.682: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:07.687: INFO: Number of nodes with available pods: 2
May 25 10:31:07.687: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 25 10:31:07.796: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:07.800: INFO: Number of nodes with available pods: 1
May 25 10:31:07.800: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:31:08.804: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:08.808: INFO: Number of nodes with available pods: 1
May 25 10:31:08.808: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:31:09.805: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 25 10:31:09.809: INFO: Number of nodes with available pods: 2
May 25 10:31:09.809: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1483, will wait for the garbage collector to delete the pods
May 25 10:31:09.875: INFO: Deleting DaemonSet.extensions daemon-set took: 5.428243ms
May 25 10:31:10.076: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.922474ms
May 25 10:31:25.681: INFO: Number of nodes with available pods: 0
May 25 10:31:25.681: INFO: Number of running nodes: 0, number of available pods: 0
May 25 10:31:25.685: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"511700"},"items":null}
May 25 10:31:25.778: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"511701"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:31:25.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1483" for this suite.
• [SLOW TEST:20.203 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":8,"skipped":3736,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:31:25.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:31:32.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6400" for this suite.
STEP: Destroying namespace "nsdeletetest-1079" for this suite.
May 25 10:31:32.129: INFO: Namespace nsdeletetest-1079 was already deleted
STEP: Destroying namespace "nsdeletetest-3404" for this suite.
• [SLOW TEST:6.327 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":9,"skipped":4179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:31:32.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 25 10:31:32.177: INFO: Waiting up to 1m0s for all nodes to be ready
May 25 10:32:32.222: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:32:32.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:32:32.272: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
May 25 10:32:32.275: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:32:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-116" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:32:32.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-517" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:60.220 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":10,"skipped":4298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:32:32.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 25 10:32:32.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 10:32:32.400: INFO: Waiting for terminating namespaces to be deleted...
May 25 10:32:32.403: INFO: Logging pods the apiserver thinks are on node v1.21-worker before test
May 25 10:32:32.412: INFO: coredns-558bd4d5db-46k4j from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container coredns ready: true, restart count 0
May 25 10:32:32.412: INFO: coredns-558bd4d5db-kff7s from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container coredns ready: true, restart count 0
May 25 10:32:32.412: INFO: create-loop-devs-zpb97 from kube-system started at 2021-05-25 02:04:35 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container loopdev ready: true, restart count 0
May 25 10:32:32.412: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:32:32.412: INFO: kube-multus-ds-fnq4h from kube-system started at 2021-05-25 02:04:15 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container kube-multus ready: true, restart count 0
May 25 10:32:32.412: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:32:32.412: INFO: tune-sysctls-4ntcs from kube-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container setsysctls ready: true, restart count 0
May 25 10:32:32.412: INFO: speaker-nljg8 from metallb-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:32:32.412: INFO: Container speaker ready: true, restart count 0
May 25 10:32:32.412: INFO: Logging pods the apiserver thinks are on node v1.21-worker2 before test
May 25 10:32:32.422: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container loopdev ready: true, restart count 0
May 25 10:32:32.422: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:32:32.422: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container kube-multus ready: true, restart count 1
May 25 10:32:32.422: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:32:32.422: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container setsysctls ready: true, restart count 0
May 25 10:32:32.422: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 25 10:32:32.422: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 25 10:32:32.422: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container controller ready: true, restart count 0
May 25 10:32:32.422: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container speaker ready: true, restart count 0
May 25 10:32:32.422: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container contour ready: true, restart count 0
May 25 10:32:32.422: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container status recorded)
May 25 10:32:32.422: INFO: Container contour ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-d25697eb-5557-4dc4-9f06-5e27d9bdc6b0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-d25697eb-5557-4dc4-9f06-5e27d9bdc6b0 off the node v1.21-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-d25697eb-5557-4dc4-9f06-5e27d9bdc6b0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:32:36.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6558" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":11,"skipped":4576,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:32:36.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 25 10:32:36.703: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
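In outline, the "simple daemon set" this fixture creates resembles the following sketch; the labels and container name are assumptions, but the images and the RollingUpdate strategy match what the log below exercises:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed selector; the e2e fixture defines its own labels
  updateStrategy:
    type: RollingUpdate          # the strategy under test; pods are replaced node by node after the image update
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                # assumed container name
        image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1   # initial image; the test later updates it to agnhost:2.32

Note that the control-plane node is skipped throughout: the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is exactly what the repeated "can't tolerate" lines below record.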
May 25 10:32:36.713: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:36.716: INFO: Number of nodes with available pods: 0
May 25 10:32:36.716: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:37.721: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:37.724: INFO: Number of nodes with available pods: 0
May 25 10:32:37.724: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:38.881: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:38.885: INFO: Number of nodes with available pods: 0
May 25 10:32:38.885: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:39.721: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:39.726: INFO: Number of nodes with available pods: 0
May 25 10:32:39.726: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:40.722: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:40.725: INFO: Number of nodes with available pods: 2
May 25 10:32:40.725: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 25 10:32:40.752: INFO: Wrong image for pod: daemon-set-2rpg4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:40.752: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:40.756: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:41.761: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:41.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:42.761: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:42.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:43.761: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:43.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:44.879: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:44.879: INFO: Pod daemon-set-w57vd is not available
May 25 10:32:44.885: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:45.761: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:45.761: INFO: Pod daemon-set-w57vd is not available
May 25 10:32:45.766: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:46.761: INFO: Wrong image for pod: daemon-set-mj49v. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 25 10:32:46.761: INFO: Pod daemon-set-w57vd is not available
May 25 10:32:46.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:47.766: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:48.767: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:49.766: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:50.766: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:51.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:52.766: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:53.767: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:54.765: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:55.780: INFO: Pod daemon-set-rnd9x is not available
May 25 10:32:55.784: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 25 10:32:55.789: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:55.980: INFO: Number of nodes with available pods: 1
May 25 10:32:55.980: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:57.180: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:57.184: INFO: Number of nodes with available pods: 1
May 25 10:32:57.184: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:57.986: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:57.991: INFO: Number of nodes with available pods: 1
May 25 10:32:57.991: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:32:59.580: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:32:59.584: INFO: Number of nodes with available pods: 1
May 25 10:32:59.584: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:33:00.081: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:33:00.085: INFO: Number of nodes with available pods: 1
May 25 10:33:00.085: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:33:01.085: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:33:01.089: INFO: Number of nodes with available pods: 1
May 25 10:33:01.089: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:33:02.082: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:33:02.088: INFO: Number of nodes with available pods: 1
May 25 10:33:02.088: INFO: Node v1.21-worker is running more than one daemon pod
May 25 10:33:02.987: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 25 10:33:02.991: INFO: Number of nodes with available pods: 2
May 25 10:33:02.991: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1525, will wait for the garbage collector to delete the pods
May 25 10:33:03.069: INFO: Deleting DaemonSet.extensions daemon-set took: 5.097971ms
May 25 10:33:03.270: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.097337ms
May 25 10:33:15.474: INFO: Number of nodes with available pods: 0
May 25 10:33:15.474: INFO: Number of running nodes: 0, number of available pods: 0
May 25 10:33:15.477: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"512213"},"items":null}
May 25 10:33:15.480: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"512213"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:33:15.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1525" for this suite.
• [SLOW TEST:38.994 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":12,"skipped":4675,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:33:15.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 25 10:33:15.539: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 10:33:15.552: INFO: Waiting for terminating namespaces to be deleted...
May 25 10:33:15.556: INFO: Logging pods the apiserver thinks are on node v1.21-worker before test
May 25 10:33:15.564: INFO: coredns-558bd4d5db-46k4j from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container coredns ready: true, restart count 0
May 25 10:33:15.564: INFO: coredns-558bd4d5db-kff7s from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container coredns ready: true, restart count 0
May 25 10:33:15.564: INFO: create-loop-devs-zpb97 from kube-system started at 2021-05-25 02:04:35 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container loopdev ready: true, restart count 0
May 25 10:33:15.564: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:33:15.564: INFO: kube-multus-ds-fnq4h from kube-system started at 2021-05-25 02:04:15 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container kube-multus ready: true, restart count 0
May 25 10:33:15.564: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:33:15.564: INFO: tune-sysctls-4ntcs from kube-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container setsysctls ready: true, restart count 0
May 25 10:33:15.564: INFO: speaker-nljg8 from metallb-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:33:15.564: INFO: Container speaker ready: true, restart count 0
May 25 10:33:15.564: INFO: Logging pods the apiserver thinks are on node v1.21-worker2 before test
May 25 10:33:15.572: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:33:15.572: INFO: Container loopdev ready: true, restart count 0
May 25 10:33:15.572: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:33:15.572: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:33:15.573: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container kube-multus ready: true, restart count 1
May 25 10:33:15.573: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:33:15.573: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container setsysctls ready: true, restart count 0
May 25 10:33:15.573: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 25 10:33:15.573: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 25 10:33:15.573: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container controller ready: true, restart count 0
May 25 10:33:15.573: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container speaker ready: true, restart count 0
May 25 10:33:15.573: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container contour ready: true, restart count 0
May 25 10:33:15.573: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container status recorded)
May 25 10:33:15.573: INFO: Container contour ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: verifying the node has the label node v1.21-worker
STEP: verifying the node has the label node v1.21-worker2
May 25 10:33:15.626: INFO: Pod coredns-558bd4d5db-46k4j requesting resource cpu=100m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod coredns-558bd4d5db-kff7s requesting resource cpu=100m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod create-loop-devs-lfj6m requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod create-loop-devs-zpb97 requesting resource cpu=0m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod kindnet-5xbgn requesting resource cpu=100m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod kindnet-64qsq requesting resource cpu=100m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod kube-multus-ds-chmxd requesting resource cpu=100m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod kube-multus-ds-fnq4h requesting resource cpu=100m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod kube-proxy-pjm2c requesting resource cpu=0m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod kube-proxy-wg4wq requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod tune-sysctls-4ntcs requesting resource cpu=0m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod tune-sysctls-b7rgm requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod dashboard-metrics-scraper-856586f554-l66m5 requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod kubernetes-dashboard-78c79f97b4-k777m requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod controller-675995489c-x7gj2 requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod speaker-lw6f6 requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod speaker-nljg8 requesting resource cpu=0m on Node v1.21-worker
May 25 10:33:15.626: INFO: Pod contour-74948c9879-n2262 requesting resource cpu=0m on Node v1.21-worker2
May 25 10:33:15.626: INFO: Pod contour-74948c9879-w22pr requesting resource cpu=0m on Node v1.21-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 25 10:33:15.626: INFO: Creating a pod which consumes cpu=61320m on Node v1.21-worker
May 25 10:33:15.634: INFO: Creating a pod which consumes cpu=61460m on Node v1.21-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e.168248bdc17bdedb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4036/filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e to v1.21-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e.168248bddeef540f], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.247/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e.168248bdeaba556b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e.168248bdec5e0349], Reason = [Created], Message = [Created container filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e.168248bdf43b2e10], Reason = [Started], Message = [Started container filler-pod-269f53bb-3128-487a-9716-ff3497f56c0e]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5.168248bdc1b4c927], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4036/filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5 to v1.21-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5.168248bde09e28e5], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.75/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5.168248bdeb2ef989], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5.168248bdec68c1b0], Reason = [Created], Message = [Created container filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5.168248bdf435dbb7], Reason = [Started], Message = [Started container filler-pod-5727ee57-3d6b-4e4c-9979-fded1d8645b5]
STEP: Considering event: Type = [Warning], Name = [additional-pod.168248be3a2ea544], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node v1.21-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node v1.21-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:33:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4036" for this suite.
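The filler pods pin down nearly all allocatable CPU so that one more request cannot fit. A sketch of such a pod, using the figure logged for v1.21-worker (the pod and container names are placeholders; the test generates UUID-suffixed names):

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod             # placeholder; see the filler-pod-… names in the events above
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.4.1
    resources:
      requests:
        cpu: 61320m            # the request logged for v1.21-worker, sized from node allocatable minus existing requests
      limits:
        cpu: 61320m

With both workers saturated, the extra pod's FailedScheduling event reads exactly as above: the control-plane node is excluded by its taint and both workers report Insufficient cpu.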
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":13,"skipped":4697,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:33:18.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:33:18.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1489" for this suite.
STEP: Destroying namespace "nspatchtest-5f9a3dc6-fef1-489c-b76c-3d785c6e5407-7242" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":14,"skipped":4719,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:33:18.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 25 10:33:18.847: INFO: Waiting up to 1m0s for all nodes to be ready
May 25 10:34:18.893: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
May 25 10:34:18.917: INFO: Created pod: pod0-sched-preemption-low-priority
May 25 10:34:18.933: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
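(Stepping back to the Namespace patch test above: the patch step amounts to adding a label and reading it back. A hedged equivalent manifest, with placeholder names and label values:)

apiVersion: v1
kind: Namespace
metadata:
  name: nspatchtest-example    # placeholder; the suite uses a generated nspatchtest-… name
  labels:
    testLabel: testValue       # assumed key/value; the test verifies a label added by PATCH

The same effect can be had imperatively with kubectl patch against an existing namespace, supplying a metadata.labels fragment as the patch body.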
STEP: Run a critical pod that uses the same resources as a lower-priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:34:39.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6832" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:80.270 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":15,"skipped":4808,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:34:39.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:35:08.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4100" for this suite.
STEP: Destroying namespace "nsdeletetest-9169" for this suite.
May 25 10:35:08.215: INFO: Namespace nsdeletetest-9169 was already deleted
STEP: Destroying namespace "nsdeletetest-5621" for this suite.
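For the preemption test that passed above: the critical pod is distinguished only by its priorityClassName. A rough sketch, under the assumption that a built-in critical class is used (the name and resource figure are placeholders; the test derives its requests from what the low-priority pod holds):

apiVersion: v1
kind: Pod
metadata:
  name: critical-pod                           # placeholder
  namespace: kube-system                       # by default the system-* priority classes may only be used here
spec:
  priorityClassName: system-cluster-critical   # outranks the low/medium test pods
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.4.1
    resources:
      requests:
        cpu: "1"                               # placeholder request

When no node has room, the scheduler preempts the lowest-priority pod whose removal lets the critical pod fit, so the low-priority pod rather than the medium-priority one is expected to be evicted.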
• [SLOW TEST:29.142 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":16,"skipped":5097,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 25 10:35:08.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 25 10:35:08.256: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 25 10:35:08.265: INFO: Waiting for terminating namespaces to be deleted...
May 25 10:35:08.269: INFO: Logging pods the apiserver thinks are on node v1.21-worker before test
May 25 10:35:08.277: INFO: coredns-558bd4d5db-46k4j from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:35:08.277: INFO: Container coredns ready: true, restart count 0
May 25 10:35:08.277: INFO: coredns-558bd4d5db-kff7s from kube-system started at 2021-05-25 02:18:50 +0000 UTC (1 container status recorded)
May 25 10:35:08.277: INFO: Container coredns ready: true, restart count 0
May 25 10:35:08.277: INFO: create-loop-devs-zpb97 from kube-system started at 2021-05-25 02:04:35 +0000 UTC (1 container status recorded)
May 25 10:35:08.277: INFO: Container loopdev ready: true, restart count 0
May 25 10:35:08.277: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:35:08.277: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:35:08.277: INFO: kube-multus-ds-fnq4h from kube-system started at 2021-05-25 02:04:15 +0000 UTC (1 container status recorded)
May 25 10:35:08.277: INFO: Container kube-multus ready: true, restart count 0
May 25 10:35:08.277: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:35:08.278: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:35:08.278: INFO: tune-sysctls-4ntcs from kube-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:35:08.278: INFO: Container setsysctls ready: true, restart count 0
May 25 10:35:08.278: INFO: speaker-nljg8 from metallb-system started at 2021-05-25 02:03:55 +0000 UTC (1 container status recorded)
May 25 10:35:08.278: INFO: Container speaker ready: true, restart count 0
May 25 10:35:08.278: INFO: Logging pods the apiserver thinks are on node v1.21-worker2 before test
May 25 10:35:08.286: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container loopdev ready: true, restart count 0
May 25 10:35:08.287: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container kindnet-cni ready: true, restart count 0
May 25 10:35:08.287: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container kube-multus ready: true, restart count 1
May 25 10:35:08.287: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container kube-proxy ready: true, restart count 0
May 25 10:35:08.287: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container setsysctls ready: true, restart count 0
May 25 10:35:08.287: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 25 10:35:08.287: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 25 10:35:08.287: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container controller ready: true, restart count 0
May 25 10:35:08.287: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container speaker ready: true, restart count 0
May 25 10:35:08.287: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container contour ready: true, restart count 0
May 25 10:35:08.287: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container status recorded)
May 25 10:35:08.287: INFO: Container contour ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.168248d7fd83f7e9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 25 10:35:09.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3318" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":17,"skipped":5433,"failed":0}
May 25 10:35:09.486: INFO: Running AfterSuite actions on all nodes
May 25 10:35:09.486: INFO: Running AfterSuite actions on node 1
May 25 10:35:09.486: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5754,"failed":0}
Ran 17 of 5771 Specs in 902.807 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5754 Skipped
PASS
Ginkgo ran 1 suite in 15m4.521515525s
Test Suite Passed
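For reference, the unschedulable pod in the final test needs nothing more than a nodeSelector that no node satisfies; the matching variant earlier in the run worked the same way, except that its kubernetes.io/e2e-… label had first been applied to a node. A hedged sketch, with an assumed selector key/value (the suite generates its own random pair):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod                    # matches the event name logged above
spec:
  nodeSelector:
    example.com/unsatisfiable: "true"     # assumed key/value; any selector matching no node reproduces the event
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1

Such a pod stays Pending with the FailedScheduling event shown: the control-plane node is ruled out by its taint and both workers fail the selector.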