I0520 13:03:03.745923 17 e2e.go:129] Starting e2e run "4447b723-2bf2-4d69-a579-6045eb7f6342" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621515782 - Will randomize all specs
Will run 17 of 5771 specs

May 20 13:03:03.825: INFO: >>> kubeConfig: /root/.kube/config
May 20 13:03:03.830: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 13:03:03.857: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 13:03:03.912: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 13:03:03.912: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 20 13:03:03.912: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 13:03:03.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 20 13:03:03.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 20 13:03:03.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 20 13:03:03.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 13:03:03.926: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 20 13:03:03.926: INFO: e2e test version: v1.21.1
May 20 13:03:03.928: INFO: kube-apiserver version: v1.21.0
May 20 13:03:03.928: INFO: >>> kubeConfig: /root/.kube/config
May 20 13:03:03.935: INFO: Cluster IP family: ipv4
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:03:03.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
W0520 13:03:03.974130 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 13:03:03.974: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 13:03:03.983: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
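The "simple DaemonSet" named in the step above is, in outline, a one-container pod template stamped onto every schedulable node. A minimal client-go sketch of an equivalent object, assuming the label key `daemonset-name` and reusing the httpd image and namespace that appear later in this log; this is a reconstruction, not the suite's exact code:

```go
// daemonset_sketch.go: rough reconstruction of the "simple DaemonSet"
// the suite creates; label key and container name are assumptions.
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the log shows (/root/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app", // assumed container name
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
					}},
				},
			},
		},
	}
	created, err := client.AppsV1().DaemonSets("daemonsets-4457").Create(
		context.TODO(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created DaemonSet", created.Name)
}
```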
May 20 13:03:04.014: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:04.017: INFO: Number of nodes with available pods: 0
May 20 13:03:04.017: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:05.023: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:05.027: INFO: Number of nodes with available pods: 0
May 20 13:03:05.027: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:06.023: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:06.027: INFO: Number of nodes with available pods: 1
May 20 13:03:06.027: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:07.080: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:07.084: INFO: Number of nodes with available pods: 2
May 20 13:03:07.084: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 20 13:03:07.179: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:07.183: INFO: Number of nodes with available pods: 1
May 20 13:03:07.183: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:08.190: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:08.194: INFO: Number of nodes with available pods: 1
May 20 13:03:08.194: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:09.188: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:09.192: INFO: Number of nodes with available pods: 2
May 20 13:03:09.192: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
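The 'Failed' step above is driven through the pod status subresource: the test overwrites one daemon pod's phase, and the DaemonSet controller deletes and replaces it. A rough reconstruction under the same label-key assumption as the previous sketch:

```go
// fail_pod_sketch.go: mark one daemon pod Failed via the status
// subresource; label selector and namespace are assumptions from above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("daemonsets-4457")

	// Pick one pod owned by the DaemonSet via its selector label.
	list, err := pods.List(context.TODO(),
		metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	pod := list.Items[0]

	// Write Failed through the status subresource; the controller
	// should then revive the daemon pod, as the log records.
	pod.Status.Phase = corev1.PodFailed
	if _, err := pods.UpdateStatus(context.TODO(), &pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```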
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4457, will wait for the garbage collector to delete the pods
May 20 13:03:09.257: INFO: Deleting DaemonSet.extensions daemon-set took: 5.595366ms
May 20 13:03:09.358: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.191942ms
May 20 13:03:23.161: INFO: Number of nodes with available pods: 0
May 20 13:03:23.161: INFO: Number of running nodes: 0, number of available pods: 0
May 20 13:03:23.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"866213"},"items":null}
May 20 13:03:23.172: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"866213"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:03:23.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4457" for this suite.

• [SLOW TEST:19.256 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":1,"skipped":20,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:03:23.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 13:03:23.254: INFO: Create a RollingUpdate DaemonSet
May 20 13:03:23.259: INFO: Check that daemon pods launch on every node of the cluster
May 20 13:03:23.262: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:23.265: INFO: Number of nodes with available pods: 0
May 20 13:03:23.265: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:24.271: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:24.275: INFO: Number of nodes with available pods: 1
May 20 13:03:24.275: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:03:25.271: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:25.276: INFO: Number of nodes with available pods: 2
May 20 13:03:25.276: INFO: Number of running nodes: 2, number of available pods: 2
May 20 13:03:25.276: INFO: Update the DaemonSet to trigger a rollout
May 20 13:03:25.287: INFO: Updating DaemonSet daemon-set
May 20 13:03:33.304: INFO: Roll back the DaemonSet before rollout is complete
May 20 13:03:33.313: INFO: Updating DaemonSet daemon-set
May 20 13:03:33.313: INFO: Make sure DaemonSet rollback is complete
May 20 13:03:33.316: INFO: Wrong image for pod: daemon-set-2dglj. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent.
May 20 13:03:33.316: INFO: Pod daemon-set-2dglj is not available
May 20 13:03:33.320: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:34.330: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:35.337: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:36.329: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:37.331: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:38.330: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:39.330: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:40.330: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:41.330: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:42.331: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:43.331: INFO: Pod daemon-set-bgw7n is not available
May 20 13:03:43.335: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1499, will wait for the garbage collector to delete the pods
May 20 13:03:43.401: INFO: Deleting DaemonSet.extensions daemon-set took: 4.92914ms
May 20 13:03:43.502: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.531945ms
May 20 13:03:53.205: INFO: Number of nodes with available pods: 0
May 20 13:03:53.205: INFO: Number of running nodes: 0, number of available pods: 0
May 20 13:03:53.208: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"866392"},"items":null}
May 20 13:03:53.211: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"866392"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:03:53.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1499" for this suite.

• [SLOW TEST:30.035 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":2,"skipped":377,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:03:53.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 20 13:03:53.300: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:53.303: INFO: Number of nodes with available pods: 0
May 20 13:03:53.303: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:54.309: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:54.314: INFO: Number of nodes with available pods: 0
May 20 13:03:54.314: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:55.309: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:55.313: INFO: Number of nodes with available pods: 2
May 20 13:03:55.313: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 20 13:03:55.329: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:55.333: INFO: Number of nodes with available pods: 1
May 20 13:03:55.333: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:56.338: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:56.341: INFO: Number of nodes with available pods: 1
May 20 13:03:56.341: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:57.338: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:57.344: INFO: Number of nodes with available pods: 1
May 20 13:03:57.344: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:58.338: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:58.341: INFO: Number of nodes with available pods: 1
May 20 13:03:58.341: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:03:59.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:03:59.343: INFO: Number of nodes with available pods: 1
May 20 13:03:59.343: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:00.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:00.343: INFO: Number of nodes with available pods: 1
May 20 13:04:00.343: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:01.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:01.342: INFO: Number of nodes with available pods: 1
May 20 13:04:01.343: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:02.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:02.343: INFO: Number of nodes with available pods: 1
May 20 13:04:02.343: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:03.387: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:03.485: INFO: Number of nodes with available pods: 1
May 20 13:04:03.485: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:04.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:04.343: INFO: Number of nodes with available pods: 1
May 20 13:04:04.343: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:05.340: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:05.344: INFO: Number of nodes with available pods: 1
May 20 13:04:05.344: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:04:06.339: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:04:06.343: INFO: Number of nodes with available pods: 2
May 20 13:04:06.343: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7412, will wait for the garbage collector to delete the pods
May 20 13:04:06.405: INFO: Deleting DaemonSet.extensions daemon-set took: 5.3266ms
May 20 13:04:06.506: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.972588ms
May 20 13:04:13.311: INFO: Number of nodes with available pods: 0
May 20 13:04:13.311: INFO: Number of running nodes: 0, number of available pods: 0
May 20 13:04:13.314: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"866532"},"items":null}
May 20 13:04:13.317: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"866532"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:04:13.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7412" for this suite.
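The stop-and-revive check above deletes one daemon pod and then polls once per second until both schedulable workers report an available pod again. A hedged sketch of that poll; the label selector and the ready-count of 2 are assumptions based on this two-worker cluster, not the suite's code:

```go
// revive_check_sketch.go: delete one daemon pod, then poll until the
// DaemonSet controller has revived it on every schedulable node.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("daemonsets-7412")
	sel := metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"}

	// Stop one daemon pod.
	list, err := pods.List(context.TODO(), sel)
	if err != nil || len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	if err := pods.Delete(context.TODO(), list.Items[0].Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll once a second, as the log shows, until both pods are Ready.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		list, err := pods.List(context.TODO(), sel)
		if err != nil {
			return false, err
		}
		ready := 0
		for _, p := range list.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready++
				}
			}
		}
		fmt.Println("ready daemon pods:", ready)
		return ready == 2, nil // two schedulable workers in this cluster
	})
	if err != nil {
		panic(err)
	}
}
```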
• [SLOW TEST:20.095 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":3,"skipped":885,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:04:13.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 13:04:13.386: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 13:05:13.431: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:05:13.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May 20 13:05:15.622: INFO: found a healthy node: v1.21-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 13:05:27.692: INFO: pods created so far: [1 1 1]
May 20 13:05:27.692: INFO: length of pods created so far: 3
May 20 13:05:31.702: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:05:38.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1160" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:05:38.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-105" for this suite.
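The preemption tests in this run lean on PriorityClass objects to separate victims from preemptors. A minimal client-go sketch of creating two such classes; the names and values are illustrative, not the suite's:

```go
// priorityclass_sketch.go: minimal PriorityClasses of the kind the
// preemption tests above rely on; names and values are illustrative.
package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	for name, value := range map[string]int32{
		"low-priority":    100, // pods that get preempted
		"medium-priority": 200, // pods that preempt them
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := client.SchedulingV1().PriorityClasses().Create(
			context.TODO(), pc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```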
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:85.443 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":4,"skipped":946,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:05:38.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 13:05:38.833: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 13:06:38.877: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
May 20 13:06:38.985: INFO: Created pod: pod0-sched-preemption-low-priority
May 20 13:06:39.278: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:06:55.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6860" for this suite.
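The critical pod in the test above outranks the low-priority victim via the built-in `system-cluster-critical` class, which may only be used in kube-system. A sketch of a pod of that shape; the request size is a stand-in (the log does not record it), and the pause image tag is taken from the scheduling events later in this log:

```go
// critical_pod_sketch.go: shape of a cluster-critical pod like the one
// the test runs; the CPU request here is illustrative only.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod"},
		Spec: corev1.PodSpec{
			// Built-in class; such pods must live in kube-system.
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // assumed
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("kube-system").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```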
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:76.588 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":5,"skipped":1326,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:06:55.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 13:06:55.440: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 20 13:06:55.448: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:06:55.450: INFO: Number of nodes with available pods: 0
May 20 13:06:55.450: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:06:56.457: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:06:56.460: INFO: Number of nodes with available pods: 0
May 20 13:06:56.460: INFO: Node v1.21-worker is running more than one daemon pod
May 20 13:06:57.455: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:06:57.459: INFO: Number of nodes with available pods: 2
May 20 13:06:57.459: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 20 13:06:57.487: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:06:57.487: INFO: Wrong image for pod: daemon-set-kx555. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:06:57.491: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:06:58.495: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:06:58.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:06:59.496: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:06:59.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:00.496: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:07:00.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:01.579: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:07:01.583: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:02.496: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:07:02.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:03.496: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:07:03.496: INFO: Pod daemon-set-kmmrh is not available
May 20 13:07:03.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:04.496: INFO: Wrong image for pod: daemon-set-2cxp6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1.
May 20 13:07:04.496: INFO: Pod daemon-set-kmmrh is not available
May 20 13:07:04.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:05.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:06.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:07.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:08.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:09.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:10.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:11.500: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:12.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:13.497: INFO: Pod daemon-set-rfg26 is not available
May 20 13:07:13.501: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 20 13:07:13.505: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:13.509: INFO: Number of nodes with available pods: 1
May 20 13:07:13.509: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:07:14.515: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:14.519: INFO: Number of nodes with available pods: 1
May 20 13:07:14.519: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:07:15.515: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 20 13:07:15.519: INFO: Number of nodes with available pods: 2
May 20 13:07:15.519: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-298, will wait for the garbage collector to delete the pods
May 20 13:07:15.596: INFO: Deleting DaemonSet.extensions daemon-set took: 6.021679ms
May 20 13:07:15.696: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.992586ms
May 20 13:07:23.400: INFO: Number of nodes with available pods: 0
May 20 13:07:23.400: INFO: Number of running nodes: 0, number of available pods: 0
May 20 13:07:23.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"867395"},"items":null}
May 20 13:07:23.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"867395"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:07:23.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-298" for this suite.
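The rollout above is triggered by swapping the pod template image between the two images named in the log, k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 and k8s.gcr.io/e2e-test-images/agnhost:2.32. One way to express that update as a strategic-merge patch; the container name `app` is an assumption carried over from the earlier sketch:

```go
// rolling_update_sketch.go: image bump that triggers a RollingUpdate
// rollout like the one logged above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Strategic-merge patch: swap the template image; with the default
	// RollingUpdate strategy the controller replaces pods node by node,
	// which is the staggered "Wrong image" sequence the log records.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
		`{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32"}]}}}}`)
	if _, err := client.AppsV1().DaemonSets("daemonsets-298").Patch(
		context.TODO(), "daemon-set", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```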
• [SLOW TEST:28.046 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":6,"skipped":1596,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:07:23.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 13:07:23.465: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 13:07:23.472: INFO: Waiting for terminating namespaces to be deleted...
May 20 13:07:23.476: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test
May 20 13:07:23.484: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container loopdev ready: true, restart count 0
May 20 13:07:23.484: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:07:23.484: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container kube-multus ready: true, restart count 0
May 20 13:07:23.484: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:07:23.484: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container setsysctls ready: true, restart count 0
May 20 13:07:23.484: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 20 13:07:23.484: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container speaker ready: true, restart count 0
May 20 13:07:23.484: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.484: INFO: Container contour ready: true, restart count 0
May 20 13:07:23.484: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test
May 20 13:07:23.493: INFO: create-loop-devs-vqtfp from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container loopdev ready: true, restart count 0
May 20 13:07:23.493: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:07:23.493: INFO: kube-multus-ds-64skz from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container kube-multus ready: true, restart count 3
May 20 13:07:23.493: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:07:23.493: INFO: tune-sysctls-wtxr5 from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container setsysctls ready: true, restart count 0
May 20 13:07:23.493: INFO: kubernetes-dashboard-78c79f97b4-fp9g9 from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 20 13:07:23.493: INFO: controller-675995489c-vhbd2 from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container controller ready: true, restart count 0
May 20 13:07:23.493: INFO: speaker-n5qnt from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container speaker ready: true, restart count 0
May 20 13:07:23.493: INFO: contour-74948c9879-97hs9 from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:07:23.493: INFO: Container contour ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: verifying the node has the label node v1.21-worker
STEP: verifying the node has the label node v1.21-worker2
May 20 13:07:23.548: INFO: Pod create-loop-devs-965k2 requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod create-loop-devs-vqtfp requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod kindnet-2qtxh requesting resource cpu=100m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod kindnet-xkwvl requesting resource cpu=100m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod kube-multus-ds-64skz requesting resource cpu=100m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod kube-multus-ds-xst78 requesting resource cpu=100m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod kube-proxy-42vmb requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod kube-proxy-gh4rd requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod tune-sysctls-jcgnq requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod tune-sysctls-wtxr5 requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod dashboard-metrics-scraper-856586f554-75x2x requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod kubernetes-dashboard-78c79f97b4-fp9g9 requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod controller-675995489c-vhbd2 requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod speaker-g5b8b requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod speaker-n5qnt requesting resource cpu=0m on Node v1.21-worker2
May 20 13:07:23.548: INFO: Pod contour-74948c9879-8866g requesting resource cpu=0m on Node v1.21-worker
May 20 13:07:23.548: INFO: Pod contour-74948c9879-97hs9 requesting resource cpu=0m on Node v1.21-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 20 13:07:23.548: INFO: Creating a pod which consumes cpu=61460m on Node v1.21-worker
May 20 13:07:23.554: INFO: Creating a pod which consumes cpu=61460m on Node v1.21-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2.1680c8401dfdc319], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5492/filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2 to v1.21-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2.1680c8403ed0d495], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.113/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2.1680c8404ba6928b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2.1680c8404e6dbb91], Reason = [Created], Message = [Created container filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2.1680c840575a1195], Reason = [Started], Message = [Started container filler-pod-11510bd5-84b6-4c12-a46b-a9c0c7ba73a2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-56212492-2d58-4955-99bb-e800db0566be.1680c8401e35e3cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5492/filler-pod-56212492-2d58-4955-99bb-e800db0566be to v1.21-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-56212492-2d58-4955-99bb-e800db0566be.1680c8403f01e9cf], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.98/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-56212492-2d58-4955-99bb-e800db0566be.1680c8404b7e399d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-56212492-2d58-4955-99bb-e800db0566be.1680c8404e72d280], Reason = [Created], Message = [Created container filler-pod-56212492-2d58-4955-99bb-e800db0566be]
STEP: Considering event: Type = [Normal], Name = [filler-pod-56212492-2d58-4955-99bb-e800db0566be.1680c8405741682b], Reason = [Started], Message = [Started container filler-pod-56212492-2d58-4955-99bb-e800db0566be]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1680c8409695b600], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node v1.21-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node v1.21-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:07:26.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5492" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":7,"skipped":1597,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:07:26.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:07:57.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2520" for this suite.
STEP: Destroying namespace "nsdeletetest-6563" for this suite.
May 20 13:07:58.881: INFO: Namespace nsdeletetest-6563 was already deleted
STEP: Destroying namespace "nsdeletetest-2553" for this suite.
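The Namespaces test above hinges on deletion being final: remove the namespace, wait for its finalizers to drain, then confirm no pods survive. A sketch of that flow; the namespace name is illustrative:

```go
// ns_teardown_sketch.go: delete a namespace, wait for it to disappear,
// then verify the pod list is empty, mirroring the steps logged above.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	const ns = "nsdeletetest" // illustrative name

	if err := client.CoreV1().Namespaces().Delete(
		context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Wait for the namespace object itself to be fully removed.
	err = wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}

	// Listing pods in the removed namespace must come back empty.
	pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pods remaining:", len(pods.Items))
}
```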
• [SLOW TEST:32.100 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":8,"skipped":1605,"failed":0}
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:07:58.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 20 13:08:06.379: INFO: Pod name wrapped-volume-race-a9a94042-bf73-4dcd-b0bb-be5be527c682: Found 0 pods out of 5
May 20 13:08:11.387: INFO: Pod name wrapped-volume-race-a9a94042-bf73-4dcd-b0bb-be5be527c682: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a9a94042-bf73-4dcd-b0bb-be5be527c682 in namespace emptydir-wrapper-1720, will wait for the garbage collector to delete the pods
May 20 13:08:21.480: INFO: Deleting ReplicationController wrapped-volume-race-a9a94042-bf73-4dcd-b0bb-be5be527c682 took: 5.888683ms
May 20 13:08:21.581: INFO: Terminating ReplicationController wrapped-volume-race-a9a94042-bf73-4dcd-b0bb-be5be527c682 pods took: 100.998839ms
STEP: Creating RC which spawns configmap-volume pods
May 20 13:08:33.303: INFO: Pod name wrapped-volume-race-ef3b19ca-ee02-4bba-aa46-dec7cff859b2: Found 0 pods out of 5
May 20 13:08:38.312: INFO: Pod name wrapped-volume-race-ef3b19ca-ee02-4bba-aa46-dec7cff859b2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ef3b19ca-ee02-4bba-aa46-dec7cff859b2 in namespace emptydir-wrapper-1720, will wait for the garbage collector to delete the pods
May 20 13:08:50.451: INFO: Deleting ReplicationController wrapped-volume-race-ef3b19ca-ee02-4bba-aa46-dec7cff859b2 took: 6.013448ms
May 20 13:08:50.552: INFO: Terminating ReplicationController wrapped-volume-race-ef3b19ca-ee02-4bba-aa46-dec7cff859b2 pods took: 101.01254ms
STEP: Creating RC which spawns configmap-volume pods
May 20 13:09:03.273: INFO: Pod name wrapped-volume-race-085f4634-b4b6-4fda-8c0d-0e5a2c23aa97: Found 0 pods out of 5
May 20 13:09:08.282: INFO: Pod name wrapped-volume-race-085f4634-b4b6-4fda-8c0d-0e5a2c23aa97: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-085f4634-b4b6-4fda-8c0d-0e5a2c23aa97 in namespace emptydir-wrapper-1720, will wait for the garbage collector to delete the pods
May 20 13:09:18.375: INFO: Deleting ReplicationController wrapped-volume-race-085f4634-b4b6-4fda-8c0d-0e5a2c23aa97 took: 6.236458ms
May 20 13:09:18.476: INFO: Terminating ReplicationController wrapped-volume-race-085f4634-b4b6-4fda-8c0d-0e5a2c23aa97 pods took: 100.892417ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:09:23.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1720" for this suite.

• [SLOW TEST:84.711 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":9,"skipped":2267,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:09:23.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 13:09:23.664: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 13:10:23.708: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:10:23.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 13:10:23.813: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
May 20 13:10:23.817: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:10:23.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-4582" for this suite.
[AfterEach] PriorityClass endpoints
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:10:23.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1763" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:60.279 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PriorityClass endpoints
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":10,"skipped":2658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:10:23.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:10:23.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9918" for this suite.
STEP: Destroying namespace "nspatchtest-8f57a237-14ff-4ea7-bbc8-7e9900c11bcd-3861" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":11,"skipped":2947,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:10:23.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:10:30.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1985" for this suite.
STEP: Destroying namespace "nsdeletetest-9264" for this suite.
May 20 13:10:30.095: INFO: Namespace nsdeletetest-9264 was already deleted
STEP: Destroying namespace "nsdeletetest-1764" for this suite.
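The PriorityClass endpoints spec above drives the scheduling.k8s.io/v1 API through the standard HTTP verbs, and the two "Forbidden: may not be changed in an update" INFO lines are the expected result of a PUT that tries to mutate a PriorityClass's value, which is immutable after creation. A hedged sketch of that check (kubeconfig path taken from the run; error handling compressed):

    package main

    import (
        "context"
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pcs := cs.SchedulingV1().PriorityClasses()

        // POST: create a PriorityClass like the spec's "p1".
        p1, err := pcs.Create(context.TODO(), &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "p1"},
            Value:      100,
        }, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }

        // PUT: changing .value must fail; the apiserver answers with the
        // "Value: Forbidden: may not be changed in an update" error logged above.
        p1.Value = 200
        if _, err := pcs.Update(context.TODO(), p1, metav1.UpdateOptions{}); apierrors.IsInvalid(err) {
            fmt.Println("update of .value rejected as expected:", err)
        }
    }

GET, LIST, PATCH, and DELETE against the same endpoints round out the verbs the conformance spec exercises.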
• [SLOW TEST:6.121 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":12,"skipped":3268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:10:30.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 13:10:30.128: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 13:10:30.140: INFO: Waiting for terminating namespaces to be deleted...
May 20 13:10:30.145: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test
May 20 13:10:30.154: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container loopdev ready: true, restart count 0
May 20 13:10:30.154: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:10:30.154: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container kube-multus ready: true, restart count 0
May 20 13:10:30.154: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:10:30.154: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container setsysctls ready: true, restart count 0
May 20 13:10:30.154: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 20 13:10:30.154: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container speaker ready: true, restart count 0
May 20 13:10:30.154: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.154: INFO: Container contour ready: true, restart count 0
May 20 13:10:30.154: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test
May 20 13:10:30.163: INFO: create-loop-devs-vqtfp from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container loopdev ready: true, restart count 0
May 20 13:10:30.163: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:10:30.163: INFO: kube-multus-ds-64skz from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container kube-multus ready: true, restart count 3
May 20 13:10:30.163: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:10:30.163: INFO: tune-sysctls-wtxr5 from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container setsysctls ready: true, restart count 0
May 20 13:10:30.163: INFO: kubernetes-dashboard-78c79f97b4-fp9g9 from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 20 13:10:30.163: INFO: controller-675995489c-vhbd2 from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container controller ready: true, restart count 0
May 20 13:10:30.163: INFO: speaker-n5qnt from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container speaker ready: true, restart count 0
May 20 13:10:30.163: INFO: contour-74948c9879-97hs9 from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:10:30.163: INFO: Container contour ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-91d1b574-ab7c-4858-a761-a0538ca790c8 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-91d1b574-ab7c-4858-a761-a0538ca790c8 off the node v1.21-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-91d1b574-ab7c-4858-a761-a0538ca790c8
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:10:34.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4266" for this suite.
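The spec above finds a schedulable node by launching an unlabeled probe pod, stamps a random label onto that node, and relaunches the pod with a matching spec.nodeSelector. Stripped of the framework plumbing, its core looks roughly like this, assuming the cs client and a test namespace ns from the first sketch, plus the k8s.io/apimachinery/pkg/types import (label key/value copied from the log; pod name and image are assumptions):

    // Stamp the label onto the node the probe pod landed on.
    patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-91d1b574-ab7c-4858-a761-a0538ca790c8":"42"}}}`)
    if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "v1.21-worker2",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Relaunch the pod with a selector only that node satisfies.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{
                "kubernetes.io/e2e-91d1b574-ab7c-4858-a761-a0538ca790c8": "42",
            },
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed image
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

The teardown STEPs above then strip the label again so later specs see an unmodified node.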
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":13,"skipped":3301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:10:34.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 13:10:34.300: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 13:11:34.346: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 2/3 of node resources.
May 20 13:11:34.369: INFO: Created pod: pod0-sched-preemption-low-priority
May 20 13:11:34.388: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:11:50.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3271" for this suite.
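Basic preemption, as exercised above: pods of low and medium priority are sized to consume two thirds of each node's resources, then a high-priority pod with the same demands is created, and the scheduler must evict a lower-priority victim to place it. A sketch of the interesting part of such a preemptor pod (class name, image, and sizing are assumptions; the suite creates its own PriorityClasses in its BeforeEach, and this reuses the cs client plus the k8s.io/apimachinery/pkg/api/resource import):

    // Assumes a PriorityClass "high-priority" exists, e.g. created with:
    //   cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
    //       ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, Value: 1000,
    //   }, metav1.CreateOptions{})
    preemptor := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
        Spec: corev1.PodSpec{
            PriorityClassName: "high-priority",
            Containers: []corev1.Container{{
                Name:  "preemptor",
                Image: "k8s.gcr.io/pause:3.4.1", // assumed image
                Resources: corev1.ResourceRequirements{
                    // Sized so it cannot fit unless a lower-priority pod is evicted.
                    Requests: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("2Gi"),
                    },
                },
            }},
        },
    }

The ~16 seconds between pod creation and teardown in the log is the spec waiting for the low-priority pod to be preempted and the high-priority pod to be scheduled in its place.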
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:76.261 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates basic preemption works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":14,"skipped":3809,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:11:50.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 20 13:11:50.571: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 20 13:11:50.579: INFO: Number of nodes with available pods: 0
May 20 13:11:50.579: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
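The "complex daemon" spec gives the DaemonSet a node selector, so no daemon pods run until some node carries the matching label; the polling below tracks pods appearing on the one relabeled node, draining when the label flips to green, and returning once the selector itself is updated to green with a RollingUpdate strategy. A sketch of such a DaemonSet, assuming the cs client from the first example plus the appsv1 "k8s.io/api/apps/v1" import (label key/value and image are assumptions, not the test's exact constants):

    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "daemon-set"}},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "daemon-set"}},
                Spec: corev1.PodSpec{
                    // Daemon pods only land on nodes labeled color=blue.
                    NodeSelector: map[string]string{"color": "blue"}, // hypothetical key/value
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1", // assumed image
                    }},
                },
            },
        },
    }
    if _, err := cs.AppsV1().DaemonSets("daemonsets-7142").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

Relabeling a node then toggles scheduling on and off without touching the DaemonSet itself, which is exactly what the poll loop below observes.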
May 20 13:11:50.596: INFO: Number of nodes with available pods: 0
May 20 13:11:50.596: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:51.681: INFO: Number of nodes with available pods: 0
May 20 13:11:51.681: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:52.600: INFO: Number of nodes with available pods: 0
May 20 13:11:52.600: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:53.601: INFO: Number of nodes with available pods: 1
May 20 13:11:53.601: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 20 13:11:53.880: INFO: Number of nodes with available pods: 1
May 20 13:11:53.880: INFO: Number of running nodes: 0, number of available pods: 1
May 20 13:11:54.885: INFO: Number of nodes with available pods: 0
May 20 13:11:54.885: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 20 13:11:55.079: INFO: Number of nodes with available pods: 0
May 20 13:11:55.079: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:56.084: INFO: Number of nodes with available pods: 0
May 20 13:11:56.084: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:57.083: INFO: Number of nodes with available pods: 0
May 20 13:11:57.083: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:58.085: INFO: Number of nodes with available pods: 0
May 20 13:11:58.085: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:11:59.084: INFO: Number of nodes with available pods: 0
May 20 13:11:59.084: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:00.084: INFO: Number of nodes with available pods: 0
May 20 13:12:00.084: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:01.085: INFO: Number of nodes with available pods: 0
May 20 13:12:01.085: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:02.084: INFO: Number of nodes with available pods: 0
May 20 13:12:02.084: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:03.084: INFO: Number of nodes with available pods: 0
May 20 13:12:03.084: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:04.085: INFO: Number of nodes with available pods: 0
May 20 13:12:04.085: INFO: Node v1.21-worker2 is running more than one daemon pod
May 20 13:12:05.084: INFO: Number of nodes with available pods: 1
May 20 13:12:05.084: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7142, will wait for the garbage collector to delete the pods
May 20 13:12:05.150: INFO: Deleting DaemonSet.extensions daemon-set took: 5.743074ms
May 20 13:12:05.251: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.125063ms
May 20 13:12:13.355: INFO: Number of nodes with available pods: 0
May 20 13:12:13.355: INFO: Number of running nodes: 0, number of available pods: 0
May 20 13:12:13.359: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"869412"},"items":null}
May 20 13:12:13.362: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"869412"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:12:13.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7142" for this suite.
• [SLOW TEST:22.876 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":15,"skipped":4072,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:12:13.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 13:12:13.432: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 13:12:13.440: INFO: Waiting for terminating namespaces to be deleted...
May 20 13:12:13.443: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test
May 20 13:12:13.452: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container loopdev ready: true, restart count 0
May 20 13:12:13.453: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:12:13.453: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container kube-multus ready: true, restart count 0
May 20 13:12:13.453: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:12:13.453: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container setsysctls ready: true, restart count 0
May 20 13:12:13.453: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 20 13:12:13.453: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container speaker ready: true, restart count 0
May 20 13:12:13.453: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.453: INFO: Container contour ready: true, restart count 0
May 20 13:12:13.453: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test
May 20 13:12:13.461: INFO: create-loop-devs-vqtfp from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container loopdev ready: true, restart count 0
May 20 13:12:13.461: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:12:13.461: INFO: kube-multus-ds-64skz from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container kube-multus ready: true, restart count 3
May 20 13:12:13.461: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:12:13.461: INFO: tune-sysctls-wtxr5 from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container setsysctls ready: true, restart count 0
May 20 13:12:13.461: INFO: kubernetes-dashboard-78c79f97b4-fp9g9 from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 20 13:12:13.461: INFO: controller-675995489c-vhbd2 from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container controller ready: true, restart count 0
May 20 13:12:13.461: INFO: speaker-n5qnt from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container speaker ready: true, restart count 0
May 20 13:12:13.461: INFO: contour-74948c9879-97hs9 from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:12:13.461: INFO: Container contour ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-63b3c91f-baf8-4d84-979c-acaa2eed615c 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.4 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-63b3c91f-baf8-4d84-979c-acaa2eed615c off the node v1.21-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-63b3c91f-baf8-4d84-979c-acaa2eed615c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:17:17.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8555" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:304.170 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":16,"skipped":4342,"failed":0}
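The hostPort conflict spec that just passed pins two pods to the same node: pod4 asks for hostPort 54322 on hostIP 0.0.0.0 and schedules, while pod5 asks for the same port on 172.18.0.4 and must stay Pending, because 0.0.0.0 already claims the port on every interface. A sketch of the colliding port declarations, reusing the cs client and types from the earlier examples (node label copied from the log; image and container port are assumptions):

    mkPod := func(name, hostIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                // Both pods are steered to the node the spec labeled above.
                NodeSelector: map[string]string{
                    "kubernetes.io/e2e-63b3c91f-baf8-4d84-979c-acaa2eed615c": "95",
                },
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed image
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 8080, // assumed container port
                        HostPort:      54322,
                        HostIP:        hostIP,
                        Protocol:      corev1.ProtocolTCP,
                    }},
                }},
            },
        }
    }
    pod4 := mkPod("pod4", "0.0.0.0")    // schedules
    pod5 := mkPod("pod5", "172.18.0.4") // must not schedule: port taken on all IPs

The five-minute gap between 13:12 and 13:17 in the log is the spec waiting out pod5's scheduling attempts before concluding it never binds, which is why this is the slowest test of the run at 304 seconds.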
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 13:17:17.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 13:17:17.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 13:17:17.618: INFO: Waiting for terminating namespaces to be deleted...
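This final spec is the negative counterpart of the earlier NodeSelector test: a pod whose selector matches no node must stay unscheduled, and the spec confirms it by watching for the FailedScheduling event quoted in the log below. Sketched with the same cs client (selector, pod name, and image are assumptions; the event name is generated by the scheduler):

    // A selector no node satisfies keeps the pod Pending forever.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"label": "nonempty"}, // matches no node
            Containers:   []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.4.1"}},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // Read back the scheduler's verdict, like the "Considering event" STEP below.
    evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
        FieldSelector: "involvedObject.name=restricted-pod,reason=FailedScheduling",
    })
    if err != nil {
        panic(err)
    }
    for _, e := range evs.Items {
        fmt.Println(e.Message)
    }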
May 20 13:17:17.621: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test
May 20 13:17:17.630: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container loopdev ready: true, restart count 0
May 20 13:17:17.630: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:17:17.630: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container kube-multus ready: true, restart count 0
May 20 13:17:17.630: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:17:17.630: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container setsysctls ready: true, restart count 0
May 20 13:17:17.630: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 20 13:17:17.630: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container speaker ready: true, restart count 0
May 20 13:17:17.630: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.630: INFO: Container contour ready: true, restart count 0
May 20 13:17:17.630: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test
May 20 13:17:17.639: INFO: create-loop-devs-vqtfp from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container loopdev ready: true, restart count 0
May 20 13:17:17.639: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container kindnet-cni ready: true, restart count 1
May 20 13:17:17.639: INFO: kube-multus-ds-64skz from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container kube-multus ready: true, restart count 3
May 20 13:17:17.639: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container kube-proxy ready: true, restart count 0
May 20 13:17:17.639: INFO: tune-sysctls-wtxr5 from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container setsysctls ready: true, restart count 0
May 20 13:17:17.639: INFO: kubernetes-dashboard-78c79f97b4-fp9g9 from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 20 13:17:17.639: INFO: controller-675995489c-vhbd2 from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container controller ready: true, restart count 0
May 20 13:17:17.639: INFO: speaker-n5qnt from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container speaker ready: true, restart count 0
May 20 13:17:17.639: INFO: contour-74948c9879-97hs9 from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container contour ready: true, restart count 0
May 20 13:17:17.639: INFO: pod4 from sched-pred-8555 started at 2021-05-20 13:12:15 +0000 UTC (1 container statuses recorded)
May 20 13:17:17.639: INFO: Container agnhost ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1680c8cbdac9a66b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 13:17:24.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9892" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.166 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if not matching [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":17,"skipped":5683,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 20 13:17:24.749: INFO: Running AfterSuite actions on all nodes
May 20 13:17:24.749: INFO: Running AfterSuite actions on node 1
May 20 13:17:24.749: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5754,"failed":0}
Ran 17 of 5771 Specs in 860.929 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5754 Skipped
PASS
Ginkgo ran 1 suite in 14m22.634502756s
Test Suite Passed
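The run closes by writing a JUnit report to /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml, which is the artifact CI systems ingest rather than this console log. A small self-contained Go sketch that summarizes such a report; the struct shapes cover only the common JUnit attributes, and real reports may nest or name things differently:

    package main

    import (
        "encoding/xml"
        "fmt"
        "os"
    )

    // Minimal JUnit shapes: just enough to summarize the report above.
    type testSuite struct {
        Tests    int        `xml:"tests,attr"`
        Failures int        `xml:"failures,attr"`
        Cases    []testCase `xml:"testcase"`
    }

    type testCase struct {
        Name    string   `xml:"name,attr"`
        Failure *failure `xml:"failure"`
    }

    type failure struct {
        Message string `xml:"message,attr"`
    }

    func main() {
        data, err := os.ReadFile("/home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml")
        if err != nil {
            panic(err)
        }
        var ts testSuite
        if err := xml.Unmarshal(data, &ts); err != nil {
            panic(err)
        }
        fmt.Printf("%d tests, %d failures\n", ts.Tests, ts.Failures)
        for _, tc := range ts.Cases {
            if tc.Failure != nil {
                fmt.Printf("FAILED: %s: %s\n", tc.Name, tc.Failure.Message)
            }
        }
    }

For this run it would report 17 tests and 0 failures, matching the SUCCESS line above.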