I0524 19:10:00.171846 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0524 19:10:00.172019 17 e2e.go:129] Starting e2e run "359b329c-7de4-4c91-86b2-03fdb0f3c875" on Ginkgo node 1 {"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621883398 - Will randomize all specs Will run 18 of 5667 specs May 24 19:10:00.263: INFO: >>> kubeConfig: /root/.kube/config May 24 19:10:00.267: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 24 19:10:00.294: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 24 19:10:00.338: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 24 19:10:00.338: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 24 19:10:00.338: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 24 19:10:00.350: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) May 24 19:10:00.350: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 24 19:10:00.350: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) May 24 19:10:00.350: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 24 19:10:00.350: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) May 24 19:10:00.350: INFO: e2e test version: v1.20.6 May 24 19:10:00.352: INFO: kube-apiserver version: v1.20.7 May 24 19:10:00.352: INFO: >>> kubeConfig: /root/.kube/config May 24 19:10:00.358: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:10:00.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred May 24 19:10:00.408: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 24 19:10:00.417: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 19:10:00.421: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 19:10:00.429: INFO: Waiting for terminating namespaces to be deleted... May 24 19:10:00.433: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 19:10:00.441: INFO: coredns-74ff55c5b-hpl9v from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container coredns ready: true, restart count 0 May 24 19:10:00.442: INFO: coredns-74ff55c5b-wk7kb from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container coredns ready: true, restart count 0 May 24 19:10:00.442: INFO: create-loop-devs-9cxgz from kube-system started at 2021-05-22 15:27:08 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container loopdev ready: true, restart count 0 May 24 19:10:00.442: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:10:00.442: INFO: kube-multus-ds-ptbx9 from kube-system started at 2021-05-22 15:26:47 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container kube-multus ready: true, restart count 0 May 24 19:10:00.442: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:10:00.442: INFO: tune-sysctls-twdb5 from kube-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container setsysctls ready: true, restart count 0 May 24 19:10:00.442: INFO: speaker-bcz47 from metallb-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.442: INFO: Container speaker ready: true, restart count 0 May 24 19:10:00.442: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 19:10:00.449: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container loopdev ready: true, restart count 0 May 24 19:10:00.449: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:10:00.449: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container kube-multus ready: true, restart count 1 May 24 19:10:00.449: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:10:00.449: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container setsysctls ready: true, restart count 0 May 24 19:10:00.449: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container controller ready: true, restart count 0 May 24 19:10:00.449: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 
+0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container speaker ready: true, restart count 0 May 24 19:10:00.449: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container contour ready: true, restart count 0 May 24 19:10:00.449: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 19:10:00.449: INFO: Container contour ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-26eeda9d-4140-4e4d-bb81-0f69dc5175a8 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.7 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-26eeda9d-4140-4e4d-bb81-0f69dc5175a8 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-26eeda9d-4140-4e4d-bb81-0f69dc5175a8 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:15:04.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6299" for this suite. 
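------------------------------
Editor's aside: the steps above exercise hostPort conflict detection. A minimal sketch of the two pod specs involved is below, assuming client-go types; the pod names, image, node label, and namespace handling are illustrative assumptions, not the conformance test's source (which lives at test/e2e/scheduling/predicates.go). Because pod4 binds hostIP 0.0.0.0 (all host addresses) on 54322/TCP, a second pod asking for the same port and protocol on a specific hostIP cannot fit on the same node and stays Pending.

// Illustrative sketch only: two pod specs that collide on hostPort 54322/TCP
// because pod4 binds hostIP 0.0.0.0 (all addresses). Names, image, and the
// nodeSelector label are assumptions for the example.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func hostPortPod(name, hostIP string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			// Pin both pods to the randomly labeled node, as the test does.
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "95"},
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Ports: []v1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP, // "0.0.0.0" conflicts with any specific hostIP
					Protocol:      v1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	pod4 := hostPortPod("pod4", "0.0.0.0")    // expected to schedule
	pod5 := hostPortPod("pod5", "172.18.0.7") // expected to stay Pending on the same node
	fmt.Println(pod4.Name, pod5.Name)
}
------------------------------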
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:304.197 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":18,"completed":1,"skipped":574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:15:04.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:15:04.631: INFO: Create a RollingUpdate DaemonSet May 24 19:15:04.636: INFO: Check that daemon pods launch on every node of the cluster May 24 19:15:04.641: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:04.644: INFO: Number of nodes with available pods: 0 May 24 19:15:04.644: INFO: Node leguer-worker is running more than one daemon pod May 24 19:15:05.650: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:05.654: INFO: Number of nodes with available pods: 0 May 24 19:15:05.654: INFO: Node leguer-worker is running more than one daemon pod May 24 19:15:06.650: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:06.654: INFO: Number of nodes with available pods: 2 May 24 19:15:06.654: INFO: Number of running nodes: 2, number of available pods: 2 May 24 19:15:06.654: INFO: Update the 
DaemonSet to trigger a rollout May 24 19:15:06.664: INFO: Updating DaemonSet daemon-set May 24 19:15:10.682: INFO: Roll back the DaemonSet before rollout is complete May 24 19:15:10.692: INFO: Updating DaemonSet daemon-set May 24 19:15:10.692: INFO: Make sure DaemonSet rollback is complete May 24 19:15:10.696: INFO: Wrong image for pod: daemon-set-tnjk7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 19:15:10.696: INFO: Pod daemon-set-tnjk7 is not available May 24 19:15:10.701: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:11.706: INFO: Wrong image for pod: daemon-set-tnjk7. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 19:15:11.706: INFO: Pod daemon-set-tnjk7 is not available May 24 19:15:11.711: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:12.705: INFO: Pod daemon-set-z9962 is not available May 24 19:15:12.711: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8114, will wait for the garbage collector to delete the pods May 24 19:15:12.784: INFO: Deleting DaemonSet.extensions daemon-set took: 5.901038ms May 24 19:15:13.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.114515ms May 24 19:15:27.988: INFO: Number of nodes with available pods: 0 May 24 19:15:27.988: INFO: Number of running nodes: 0, number of available pods: 0 May 24 19:15:27.995: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"833204"},"items":null} May 24 19:15:27.999: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"833204"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:15:28.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8114" for this suite. 
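------------------------------
Editor's aside: a minimal sketch of the rollback flow logged above, assuming a kubeconfig at /root/.kube/config and the namespace "daemonsets-8114" from the log; it mirrors the sequence (update to an unpullable image, then restore the good image before the rollout finishes) but is not the conformance test implementation. Pods that never left the old revision keep running with restart count 0, which is the "without unnecessary restarts" check.

// Minimal rollback sketch using client-go; error handling kept terse on purpose.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	dsClient := cs.AppsV1().DaemonSets("daemonsets-8114")

	// Trigger a rollout with an image that can never be pulled.
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}

	// Roll back before the rollout completes: restore the known-good image.
	// Healthy pods still on the old revision should not be restarted.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------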
• [SLOW TEST:23.451 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":18,"completed":2,"skipped":1088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:15:28.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:15:28.086: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 24 19:15:28.095: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:28.098: INFO: Number of nodes with available pods: 0 May 24 19:15:28.098: INFO: Node leguer-worker is running more than one daemon pod May 24 19:15:29.107: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:29.122: INFO: Number of nodes with available pods: 0 May 24 19:15:29.122: INFO: Node leguer-worker is running more than one daemon pod May 24 19:15:30.103: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:30.107: INFO: Number of nodes with available pods: 0 May 24 19:15:30.107: INFO: Node leguer-worker is running more than one daemon pod May 24 19:15:31.123: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:31.326: INFO: Number of nodes with available pods: 2 May 24 19:15:31.326: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 24 19:15:31.434: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 19:15:31.434: INFO: Wrong image for pod: daemon-set-ph4bt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:31.540: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:32.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:32.545: INFO: Wrong image for pod: daemon-set-ph4bt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:32.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:33.626: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:33.626: INFO: Wrong image for pod: daemon-set-ph4bt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:33.626: INFO: Pod daemon-set-ph4bt is not available May 24 19:15:33.631: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:34.545: INFO: Pod daemon-set-hfls5 is not available May 24 19:15:34.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:34.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:35.548: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:35.552: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:36.544: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:36.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:36.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:37.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:37.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:37.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:38.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 19:15:38.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:38.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:39.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:39.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:39.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:40.544: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:40.544: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:40.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:41.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:41.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:41.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:42.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:42.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:42.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:43.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:43.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:43.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:44.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:44.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:44.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:45.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:45.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:45.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:46.544: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
May 24 19:15:46.544: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:46.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:47.545: INFO: Wrong image for pod: daemon-set-nzrk9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. May 24 19:15:47.545: INFO: Pod daemon-set-nzrk9 is not available May 24 19:15:47.549: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:48.545: INFO: Pod daemon-set-qps96 is not available May 24 19:15:48.550: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 24 19:15:48.554: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:48.558: INFO: Number of nodes with available pods: 1 May 24 19:15:48.558: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:15:49.626: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:49.630: INFO: Number of nodes with available pods: 1 May 24 19:15:49.630: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:15:50.564: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:15:50.568: INFO: Number of nodes with available pods: 2 May 24 19:15:50.568: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5631, will wait for the garbage collector to delete the pods May 24 19:15:50.646: INFO: Deleting DaemonSet.extensions daemon-set took: 6.979766ms May 24 19:15:51.346: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.2893ms May 24 19:15:57.949: INFO: Number of nodes with available pods: 0 May 24 19:15:57.949: INFO: Number of running nodes: 0, number of available pods: 0 May 24 19:15:57.952: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"833403"},"items":null} May 24 19:15:57.955: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"833403"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:15:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5631" for this suite. 
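------------------------------
Editor's aside: a sketch of a DaemonSet spec with the RollingUpdate strategy that the test above exercises; the names, labels, and container details are assumptions, not the test's literals. With this strategy, changing the pod template image makes the controller replace daemon pods node by node, which is the behaviour visible in the "Wrong image for pod" / "is not available" churn logged above.

// Sketch of a RollingUpdate DaemonSet; updating the template image triggers the rollout.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate: template changes are rolled out pod by pod.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	// Setting ds.Spec.Template.Spec.Containers[0].Image to
	// "k8s.gcr.io/e2e-test-images/agnhost:2.21" and calling Update() starts the rollout.
	fmt.Println(ds.Name)
}
------------------------------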
• [SLOW TEST:29.950 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":18,"completed":3,"skipped":1319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:15:57.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 24 19:15:58.021: INFO: Waiting up to 1m0s for all nodes to be ready May 24 19:16:58.068: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. May 24 19:16:58.097: INFO: Created pod: pod0-sched-preemption-low-priority May 24 19:16:58.113: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:17:16.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9958" for this suite. 
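------------------------------
Editor's aside: a rough sketch of the shape of the preemption scenario above, not the test code: lower-priority pods hold most of a node's resources, then a critical pod with an equivalent request is created and the scheduler evicts a lower-priority victim to make room. The priority class name, image, and request sizes here are assumptions.

// Sketch of a resource-requesting pod at two priority levels.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func requestingPod(name, priorityClass string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			PriorityClassName: priorityClass,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						// Sized so the low/medium pods fill roughly 2/3 of the node.
						v1.ResourceCPU:    resource.MustParse("600m"),
						v1.ResourceMemory: resource.MustParse("200Mi"),
					},
				},
			}},
		},
	}
}

func main() {
	victim := requestingPod("pod0-sched-preemption-low-priority", "low-priority")
	critical := requestingPod("critical-pod", "system-cluster-critical")
	// With the node full, scheduling `critical` preempts (evicts) `victim`.
	fmt.Println(victim.Name, critical.Name)
}
------------------------------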
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:78.240 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":18,"completed":4,"skipped":1395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:17:16.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 19:17:16.289: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:16.292: INFO: Number of nodes with available pods: 0 May 24 19:17:16.292: INFO: Node leguer-worker is running more than one daemon pod May 24 19:17:17.299: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:17.303: INFO: Number of nodes with available pods: 0 May 24 19:17:17.303: INFO: Node leguer-worker is running more than one daemon pod May 24 19:17:18.298: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:18.302: INFO: Number of nodes with available pods: 2 May 24 19:17:18.302: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 24 19:17:18.321: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:18.327: INFO: Number of nodes with available pods: 1 May 24 19:17:18.327: INFO: Node leguer-worker is running more than one daemon pod May 24 19:17:19.333: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:19.337: INFO: Number of nodes with available pods: 1 May 24 19:17:19.337: INFO: Node leguer-worker is running more than one daemon pod May 24 19:17:20.333: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:17:20.337: INFO: Number of nodes with available pods: 2 May 24 19:17:20.337: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7654, will wait for the garbage collector to delete the pods May 24 19:17:20.403: INFO: Deleting DaemonSet.extensions daemon-set took: 6.765896ms May 24 19:17:21.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.261392ms May 24 19:17:28.007: INFO: Number of nodes with available pods: 0 May 24 19:17:28.007: INFO: Number of running nodes: 0, number of available pods: 0 May 24 19:17:28.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"833806"},"items":null} May 24 19:17:28.013: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"833806"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:17:28.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7654" for this suite. 
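------------------------------
Editor's aside: a minimal sketch of the "set a daemon pod's phase to Failed, check that it is revived" step above, assuming a kubeconfig at /root/.kube/config, the namespace "daemonsets-7654" from the log, and a "daemonset-name=daemon-set" pod label; it is not the test source. The DaemonSet controller deletes the failed pod and creates a replacement, which is what the revival check verifies.

// Force one daemon pod's phase to Failed via the status subresource.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	pods := cs.CoreV1().Pods("daemonsets-7654")

	// Pick one daemon pod (label selector is an assumption for this sketch).
	list, err := pods.List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"})
	if err != nil || len(list.Items) == 0 {
		panic("no daemon pods found")
	}
	pod := list.Items[0]
	pod.Status.Phase = v1.PodFailed
	if _, err := pods.UpdateStatus(ctx, &pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// The DaemonSet controller now replaces the failed pod with a new one.
}
------------------------------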
• [SLOW TEST:11.813 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":18,"completed":5,"skipped":1738,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:17:28.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 24 19:17:28.096: INFO: Waiting up to 1m0s for all nodes to be ready May 24 19:18:28.139: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. May 24 19:18:28.164: INFO: Created pod: pod0-sched-preemption-low-priority May 24 19:18:28.228: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:18:40.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1313" for this suite. 
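------------------------------
Editor's aside: basic preemption rests on two PriorityClass objects, a low "victim" class and a higher preemptor class; the sketch below shows their shape. The names and values are assumptions, not the ones the test registers.

// Two PriorityClasses; a pod referencing the higher one can preempt pods on the lower one.
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func priorityClass(name string, value int32) *schedulingv1.PriorityClass {
	return &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Value:      value, // higher value wins; the scheduler may evict lower-value pods
	}
}

func main() {
	low := priorityClass("low-priority", 10)
	high := priorityClass("high-priority", 1000)
	// Pods set spec.priorityClassName to one of these names; when a node is full,
	// a "high-priority" pod can preempt a "low-priority" one.
	fmt.Println(low.Name, high.Name)
}
------------------------------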
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:72.287 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":18,"completed":6,"skipped":2362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:18:40.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:18:40.393: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 24 19:18:40.400: INFO: Number of nodes with available pods: 0 May 24 19:18:40.400: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 24 19:18:40.417: INFO: Number of nodes with available pods: 0 May 24 19:18:40.417: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:41.422: INFO: Number of nodes with available pods: 0 May 24 19:18:41.422: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:42.422: INFO: Number of nodes with available pods: 1 May 24 19:18:42.422: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 24 19:18:42.441: INFO: Number of nodes with available pods: 1 May 24 19:18:42.441: INFO: Number of running nodes: 0, number of available pods: 1 May 24 19:18:43.445: INFO: Number of nodes with available pods: 0 May 24 19:18:43.445: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 24 19:18:43.456: INFO: Number of nodes with available pods: 0 May 24 19:18:43.456: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:44.460: INFO: Number of nodes with available pods: 0 May 24 19:18:44.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:45.460: INFO: Number of nodes with available pods: 0 May 24 19:18:45.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:46.460: INFO: Number of nodes with available pods: 0 May 24 19:18:46.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:47.460: INFO: Number of nodes with available pods: 0 May 24 19:18:47.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:48.460: INFO: Number of nodes with available pods: 0 May 24 19:18:48.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:49.460: INFO: Number of nodes with available pods: 0 May 24 19:18:49.460: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:18:50.461: INFO: Number of nodes with available pods: 1 May 24 19:18:50.461: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9851, will wait for the garbage collector to delete the pods May 24 19:18:50.527: INFO: Deleting DaemonSet.extensions daemon-set took: 6.918214ms May 24 19:18:51.228: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.289015ms May 24 19:18:58.131: INFO: Number of nodes with available pods: 0 May 24 19:18:58.131: INFO: Number of running nodes: 0, number of available pods: 0 May 24 19:18:58.134: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"834196"},"items":null} May 24 19:18:58.136: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"834196"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:18:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9851" for this suite. 
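------------------------------
Editor's aside: a sketch of the node-selector daemon the log describes: the pod template selects nodes by a color label, so relabeling a node adds or removes the daemon pod, and the test later switches the selector to green and the update strategy to RollingUpdate. The label key/values, the initial OnDelete strategy, and all names are assumptions for illustration.

// DaemonSet whose pods only run on nodes labeled color=blue.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Assumed starting strategy; the test later changes it to RollingUpdate
			// at the same time as it moves the nodeSelector from "blue" to "green".
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.OnDeleteDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []v1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	// kubectl label node leguer-worker2 color=blue              -> one daemon pod appears
	// kubectl label node leguer-worker2 color=green --overwrite -> that pod is removed
	fmt.Println(ds.Name)
}
------------------------------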
• [SLOW TEST:17.833 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":18,"completed":7,"skipped":2515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:18:58.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 19:18:58.210: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 19:18:58.218: INFO: Waiting for terminating namespaces to be deleted... 
May 24 19:18:58.222: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 19:18:58.231: INFO: coredns-74ff55c5b-hpl9v from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container coredns ready: true, restart count 0 May 24 19:18:58.231: INFO: coredns-74ff55c5b-wk7kb from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container coredns ready: true, restart count 0 May 24 19:18:58.231: INFO: create-loop-devs-9cxgz from kube-system started at 2021-05-22 15:27:08 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container loopdev ready: true, restart count 0 May 24 19:18:58.231: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:18:58.231: INFO: kube-multus-ds-ptbx9 from kube-system started at 2021-05-22 15:26:47 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container kube-multus ready: true, restart count 0 May 24 19:18:58.231: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:18:58.231: INFO: tune-sysctls-twdb5 from kube-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container setsysctls ready: true, restart count 0 May 24 19:18:58.231: INFO: speaker-bcz47 from metallb-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.231: INFO: Container speaker ready: true, restart count 0 May 24 19:18:58.231: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 19:18:58.239: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container loopdev ready: true, restart count 0 May 24 19:18:58.239: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:18:58.239: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container kube-multus ready: true, restart count 1 May 24 19:18:58.239: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:18:58.239: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container setsysctls ready: true, restart count 0 May 24 19:18:58.239: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container controller ready: true, restart count 0 May 24 19:18:58.239: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container speaker ready: true, restart count 0 May 24 19:18:58.239: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 
19:18:58.239: INFO: Container contour ready: true, restart count 0 May 24 19:18:58.239: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 19:18:58.239: INFO: Container contour ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6e871759-ea1e-4aa4-9bbb-961a8235ab62 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.7 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.7 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 May 24 19:19:08.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:08.343: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 May 24 19:19:08.516: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:08.516: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 UDP May 24 19:19:08.634: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54321] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:08.634: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 May 24 19:19:13.756: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:13.756: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 May 24 19:19:13.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:13.891: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 UDP May 24 19:19:14.009: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 
54321] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:14.009: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 May 24 19:19:19.127: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:19.127: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 May 24 19:19:19.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:19.277: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 UDP May 24 19:19:19.398: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54321] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:19.398: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 May 24 19:19:24.512: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:24.512: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 May 24 19:19:24.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:24.656: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 UDP May 24 19:19:24.784: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54321] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:24.784: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 May 24 19:19:29.908: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:29.908: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 May 24 19:19:30.053: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54321/hostname] Namespace:sched-pred-1468 PodName:e2e-host-exec 
ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:30.054: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54321 UDP May 24 19:19:30.179: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54321] Namespace:sched-pred-1468 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 24 19:19:30.179: INFO: >>> kubeConfig: /root/.kube/config STEP: removing the label kubernetes.io/e2e-6e871759-ea1e-4aa4-9bbb-961a8235ab62 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6e871759-ea1e-4aa4-9bbb-961a8235ab62 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:19:35.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1468" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:37.160 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":18,"completed":8,"skipped":2960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:19:35.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 19:19:35.378: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 19:19:35.386: INFO: Waiting for terminating namespaces to be deleted... 
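The hostPort spec above passes because the three server pods bind the same hostPort 54321 but differ in hostIP or protocol, so none of them conflict. A minimal sketch of pod1 and pod3 under those constraints follows; the pod names and the hostPort/hostIP/protocol values come from the log, while the image is a stand-in (the log shows an agnhost container serving /hostname, whose exact arguments are not captured here):

apiVersion: v1
kind: Pod
metadata:
  name: pod1                       # TCP on 127.0.0.1:54321
spec:
  nodeName: leguer-worker          # pinned to the node the test selected above
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.2    # stand-in image; the real pod runs agnhost answering /hostname
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3                       # UDP on 172.18.0.7:54321, so no conflict with pod1 or pod2
spec:
  nodeName: leguer-worker
  containers:
  - name: agnhost
    image: k8s.gcr.io/pause:3.2    # stand-in image, as above
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 172.18.0.7
      protocol: UDP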
May 24 19:19:35.390: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 19:19:35.400: INFO: coredns-74ff55c5b-hpl9v from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container coredns ready: true, restart count 0 May 24 19:19:35.400: INFO: coredns-74ff55c5b-wk7kb from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container coredns ready: true, restart count 0 May 24 19:19:35.400: INFO: create-loop-devs-9cxgz from kube-system started at 2021-05-22 15:27:08 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container loopdev ready: true, restart count 0 May 24 19:19:35.400: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:19:35.400: INFO: kube-multus-ds-ptbx9 from kube-system started at 2021-05-22 15:26:47 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container kube-multus ready: true, restart count 0 May 24 19:19:35.400: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:19:35.400: INFO: tune-sysctls-twdb5 from kube-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container setsysctls ready: true, restart count 0 May 24 19:19:35.400: INFO: speaker-bcz47 from metallb-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container speaker ready: true, restart count 0 May 24 19:19:35.400: INFO: e2e-host-exec from sched-pred-1468 started at 2021-05-24 19:19:06 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container e2e-host-exec ready: true, restart count 0 May 24 19:19:35.400: INFO: pod1 from sched-pred-1468 started at 2021-05-24 19:19:00 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:35.400: INFO: pod2 from sched-pred-1468 started at 2021-05-24 19:19:02 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:35.400: INFO: pod3 from sched-pred-1468 started at 2021-05-24 19:19:04 +0000 UTC (1 container statuses recorded) May 24 19:19:35.400: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:35.400: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 19:19:35.408: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container loopdev ready: true, restart count 0 May 24 19:19:35.408: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:19:35.408: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container kube-multus ready: true, restart count 1 May 24 19:19:35.408: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container kube-proxy ready: true, 
restart count 0 May 24 19:19:35.408: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container setsysctls ready: true, restart count 0 May 24 19:19:35.408: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container controller ready: true, restart count 0 May 24 19:19:35.408: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container speaker ready: true, restart count 0 May 24 19:19:35.408: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 19:19:35.408: INFO: Container contour ready: true, restart count 0 May 24 19:19:35.409: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 19:19:35.409: INFO: Container contour ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e0a880e3-edea-4cc7-a761-1c98ee43e49c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-e0a880e3-edea-4cc7-a761-1c98ee43e49c off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e0a880e3-edea-4cc7-a761-1c98ee43e49c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:19:41.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7535" for this suite. 
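The matching NodeSelector spec above labels leguer-worker2 with the random key/value shown (value "42") and then relaunches the pod with a matching nodeSelector, so it schedules onto that node. A minimal sketch of the relaunched pod; the label key, label value, and pod name appear in the log, while the image is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels                # this pod shows up on leguer-worker2 in the later node listing
spec:
  nodeSelector:
    kubernetes.io/e2e-e0a880e3-edea-4cc7-a761-1c98ee43e49c: "42"   # label applied to leguer-worker2 above
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2    # illustrative; the pause image is used elsewhere in this run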
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:6.145 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":18,"completed":9,"skipped":3381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:19:41.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:19:41.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8987" for this suite. STEP: Destroying namespace "nspatchtest-d974156f-7d90-4f1c-8b4a-d6849c921397-5630" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":18,"completed":10,"skipped":3499,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:19:41.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 19:19:41.599: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 19:19:41.608: INFO: Waiting for terminating namespaces to be deleted... 
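The Namespaces "should patch a Namespace" spec earlier in this block only needs a merge patch that adds a label and a follow-up GET that confirms the label is present. A sketch of an equivalent patch body; the label key and value here are illustrative, not read from the log:

# applied as a merge patch, e.g. kubectl patch namespace <name> --type=merge -p '<this body as JSON>'
metadata:
  labels:
    e2e-patched-label: "true"      # illustrative; the spec then gets the Namespace and checks the label exists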
May 24 19:19:41.611: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 19:19:41.620: INFO: coredns-74ff55c5b-hpl9v from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container coredns ready: true, restart count 0 May 24 19:19:41.620: INFO: coredns-74ff55c5b-wk7kb from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container coredns ready: true, restart count 0 May 24 19:19:41.620: INFO: create-loop-devs-9cxgz from kube-system started at 2021-05-22 15:27:08 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container loopdev ready: true, restart count 0 May 24 19:19:41.620: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:19:41.620: INFO: kube-multus-ds-ptbx9 from kube-system started at 2021-05-22 15:26:47 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container kube-multus ready: true, restart count 0 May 24 19:19:41.620: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:19:41.620: INFO: tune-sysctls-twdb5 from kube-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container setsysctls ready: true, restart count 0 May 24 19:19:41.620: INFO: speaker-bcz47 from metallb-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container speaker ready: true, restart count 0 May 24 19:19:41.620: INFO: e2e-host-exec from sched-pred-1468 started at 2021-05-24 19:19:06 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container e2e-host-exec ready: true, restart count 0 May 24 19:19:41.620: INFO: pod1 from sched-pred-1468 started at 2021-05-24 19:19:00 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:41.620: INFO: pod2 from sched-pred-1468 started at 2021-05-24 19:19:02 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:41.620: INFO: pod3 from sched-pred-1468 started at 2021-05-24 19:19:04 +0000 UTC (1 container statuses recorded) May 24 19:19:41.620: INFO: Container agnhost ready: true, restart count 0 May 24 19:19:41.620: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 19:19:41.629: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 19:19:41.629: INFO: Container loopdev ready: true, restart count 0 May 24 19:19:41.629: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.629: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:19:41.629: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:19:41.629: INFO: Container kube-multus ready: true, restart count 1 May 24 19:19:41.629: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.629: INFO: Container kube-proxy ready: true, 
restart count 0 May 24 19:19:41.629: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:19:41.629: INFO: Container setsysctls ready: true, restart count 0 May 24 19:19:41.629: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 19:19:41.630: INFO: Container controller ready: true, restart count 0 May 24 19:19:41.630: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 19:19:41.630: INFO: Container speaker ready: true, restart count 0 May 24 19:19:41.630: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 19:19:41.630: INFO: Container contour ready: true, restart count 0 May 24 19:19:41.630: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 19:19:41.630: INFO: Container contour ready: true, restart count 0 May 24 19:19:41.630: INFO: with-labels from sched-pred-7535 started at 2021-05-24 19:19:37 +0000 UTC (1 container statuses recorded) May 24 19:19:41.630: INFO: Container with-labels ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.168216e361c6de93], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:19:42.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6617" for this suite. 
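The non-matching case above is the mirror image: the pod carries a nodeSelector that no node satisfies, so it stays Pending and the scheduler emits the FailedScheduling event quoted above. A sketch of such a pod; the selector key/value and the image are illustrative, only the pod name comes from the event:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod             # name matches the FailedScheduling event above
spec:
  nodeSelector:
    label: nonempty                # illustrative selector that no node in this cluster carries
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.2    # illustrative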
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":18,"completed":11,"skipped":3518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:19:42.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 24 19:19:42.987: INFO: Pod name wrapped-volume-race-02555e62-39e4-405b-8ff7-033fbaa84200: Found 3 pods out of 5 May 24 19:19:47.995: INFO: Pod name wrapped-volume-race-02555e62-39e4-405b-8ff7-033fbaa84200: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-02555e62-39e4-405b-8ff7-033fbaa84200 in namespace emptydir-wrapper-7426, will wait for the garbage collector to delete the pods May 24 19:19:58.084: INFO: Deleting ReplicationController wrapped-volume-race-02555e62-39e4-405b-8ff7-033fbaa84200 took: 8.148833ms May 24 19:19:58.784: INFO: Terminating ReplicationController wrapped-volume-race-02555e62-39e4-405b-8ff7-033fbaa84200 pods took: 700.26461ms STEP: Creating RC which spawns configmap-volume pods May 24 19:20:08.003: INFO: Pod name wrapped-volume-race-5863c424-1d87-4119-b225-f26b13245e34: Found 0 pods out of 5 May 24 19:20:13.012: INFO: Pod name wrapped-volume-race-5863c424-1d87-4119-b225-f26b13245e34: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5863c424-1d87-4119-b225-f26b13245e34 in namespace emptydir-wrapper-7426, will wait for the garbage collector to delete the pods May 24 19:20:23.100: INFO: Deleting ReplicationController wrapped-volume-race-5863c424-1d87-4119-b225-f26b13245e34 took: 7.814396ms May 24 19:20:23.800: INFO: Terminating ReplicationController wrapped-volume-race-5863c424-1d87-4119-b225-f26b13245e34 pods took: 700.2536ms STEP: Creating RC which spawns configmap-volume pods May 24 19:20:28.123: INFO: Pod name wrapped-volume-race-8df47ca5-a01f-4351-9a25-de26083b6931: Found 0 pods out of 5 May 24 19:20:33.137: INFO: Pod name wrapped-volume-race-8df47ca5-a01f-4351-9a25-de26083b6931: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8df47ca5-a01f-4351-9a25-de26083b6931 in namespace emptydir-wrapper-7426, will wait for the garbage collector to delete the pods May 24 19:20:43.228: INFO: Deleting ReplicationController wrapped-volume-race-8df47ca5-a01f-4351-9a25-de26083b6931 took: 8.400764ms May 24 19:20:43.928: 
INFO: Terminating ReplicationController wrapped-volume-race-8df47ca5-a01f-4351-9a25-de26083b6931 pods took: 700.253169ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:20:48.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7426" for this suite. • [SLOW TEST:65.636 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":18,"completed":12,"skipped":3605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:20:48.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 19:20:48.353: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 19:20:48.362: INFO: Waiting for terminating namespaces to be deleted... 
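Each wrapped-volume-race pod created by the EmptyDir wrapper spec above mounts all 50 ConfigMaps as separate volumes, which is what exercises the race between volume setup and the emptyDir wrapper. A trimmed sketch showing two of the volumes; the names and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/pause:3.2    # illustrative
    volumeMounts:
    - name: racey-configmap-0
      mountPath: /etc/config-0
    - name: racey-configmap-1
      mountPath: /etc/config-1
  volumes:
  - name: racey-configmap-0
    configMap:
      name: racey-configmap-0      # one of the 50 ConfigMaps the spec creates up front
  - name: racey-configmap-1
    configMap:
      name: racey-configmap-1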
May 24 19:20:48.365: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 19:20:48.374: INFO: coredns-74ff55c5b-hpl9v from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container coredns ready: true, restart count 0 May 24 19:20:48.374: INFO: coredns-74ff55c5b-wk7kb from kube-system started at 2021-05-22 15:45:42 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container coredns ready: true, restart count 0 May 24 19:20:48.374: INFO: create-loop-devs-9cxgz from kube-system started at 2021-05-22 15:27:08 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container loopdev ready: true, restart count 0 May 24 19:20:48.374: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:20:48.374: INFO: kube-multus-ds-ptbx9 from kube-system started at 2021-05-22 15:26:47 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container kube-multus ready: true, restart count 0 May 24 19:20:48.374: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:20:48.374: INFO: tune-sysctls-twdb5 from kube-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container setsysctls ready: true, restart count 0 May 24 19:20:48.374: INFO: speaker-bcz47 from metallb-system started at 2021-05-22 15:26:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.374: INFO: Container speaker ready: true, restart count 0 May 24 19:20:48.374: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 19:20:48.382: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container loopdev ready: true, restart count 0 May 24 19:20:48.382: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container kindnet-cni ready: true, restart count 13 May 24 19:20:48.382: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container kube-multus ready: true, restart count 1 May 24 19:20:48.382: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container kube-proxy ready: true, restart count 0 May 24 19:20:48.382: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container setsysctls ready: true, restart count 0 May 24 19:20:48.382: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container controller ready: true, restart count 0 May 24 19:20:48.382: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container speaker ready: true, restart count 0 May 24 19:20:48.382: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 
19:20:48.382: INFO: Container contour ready: true, restart count 0 May 24 19:20:48.382: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 19:20:48.382: INFO: Container contour ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 May 24 19:20:48.436: INFO: Pod coredns-74ff55c5b-hpl9v requesting resource cpu=100m on Node leguer-worker May 24 19:20:48.436: INFO: Pod coredns-74ff55c5b-wk7kb requesting resource cpu=100m on Node leguer-worker May 24 19:20:48.436: INFO: Pod create-loop-devs-9cxgz requesting resource cpu=0m on Node leguer-worker May 24 19:20:48.436: INFO: Pod create-loop-devs-nbf25 requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod kindnet-kx9mk requesting resource cpu=100m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod kindnet-svp2q requesting resource cpu=100m on Node leguer-worker May 24 19:20:48.436: INFO: Pod kube-multus-ds-n48bs requesting resource cpu=100m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod kube-multus-ds-ptbx9 requesting resource cpu=100m on Node leguer-worker May 24 19:20:48.436: INFO: Pod kube-proxy-7g274 requesting resource cpu=0m on Node leguer-worker May 24 19:20:48.436: INFO: Pod kube-proxy-mp68m requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod tune-sysctls-twdb5 requesting resource cpu=0m on Node leguer-worker May 24 19:20:48.436: INFO: Pod tune-sysctls-vjdll requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod controller-675995489c-h2wms requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod speaker-55zcr requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod speaker-bcz47 requesting resource cpu=0m on Node leguer-worker May 24 19:20:48.436: INFO: Pod contour-6648989f79-2vldk requesting resource cpu=0m on Node leguer-worker2 May 24 19:20:48.436: INFO: Pod contour-6648989f79-8gz4z requesting resource cpu=0m on Node leguer-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 24 19:20:48.436: INFO: Creating a pod which consumes cpu=61320m on Node leguer-worker May 24 19:20:48.447: INFO: Creating a pod which consumes cpu=61460m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
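The filler pods above are sized from the per-node CPU accounting just logged: each requests roughly the node's remaining allocatable CPU, so the follow-up pod cannot fit anywhere. A sketch of the leguer-worker filler pod under that assumption; the CPU figure and image match the log, the pod name is illustrative (the real one carries a UUID, as the events below show):

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example
spec:
  nodeName: leguer-worker
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2    # image reported in the Pulled event below
    resources:
      requests:
        cpu: 61320m                # allocatable CPU minus what is already requested on leguer-worker
      limits:
        cpu: 61320m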
STEP: Considering event: Type = [Normal], Name = [filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a.168216f2eed5367d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5619/filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a.168216f310b6b962], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.242/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a.168216f31c49433b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a.168216f33542724b], Reason = [Created], Message = [Created container filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a] STEP: Considering event: Type = [Normal], Name = [filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a.168216f33e7c1831], Reason = [Started], Message = [Started container filler-pod-3143f51d-4345-4ff7-86b6-43be9f1a9a7a] STEP: Considering event: Type = [Normal], Name = [filler-pod-950ce259-6a35-410c-9c8a-e17e6d63fba9.168216f2ef1118ad], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5619/filler-pod-950ce259-6a35-410c-9c8a-e17e6d63fba9 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-950ce259-6a35-410c-9c8a-e17e6d63fba9.168216f30fc4b3ca], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.125/24]] STEP: Considering event: Type = [Warning], Name = [additional-pod.168216f3675cc910], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:20:52.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5619" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":18,"completed":13,"skipped":3802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:20:52.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:21:21.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5809" for this suite. STEP: Destroying namespace "nsdeletetest-7958" for this suite. May 24 19:21:21.572: INFO: Namespace nsdeletetest-7958 was already deleted STEP: Destroying namespace "nsdeletetest-9015" for this suite. 
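The namespace-deletion spec above reduces to: create a namespace, run a pod inside it, delete the namespace, wait for it to disappear, then recreate it and verify no pods remain. A minimal sketch of the two objects involved; names and image are illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nsdeletetest-example
spec:
  containers:
  - name: nginx
    image: nginx                   # illustrative; deleting the namespace above removes this pod with it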
• [SLOW TEST:29.132 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":18,"completed":14,"skipped":4657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:21:21.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 19:21:21.636: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:21.639: INFO: Number of nodes with available pods: 0 May 24 19:21:21.639: INFO: Node leguer-worker is running more than one daemon pod May 24 19:21:22.644: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:22.648: INFO: Number of nodes with available pods: 0 May 24 19:21:22.648: INFO: Node leguer-worker is running more than one daemon pod May 24 19:21:23.644: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:23.648: INFO: Number of nodes with available pods: 2 May 24 19:21:23.648: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 24 19:21:23.727: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:23.738: INFO: Number of nodes with available pods: 1 May 24 19:21:23.738: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:24.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:24.747: INFO: Number of nodes with available pods: 1 May 24 19:21:24.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:25.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:25.747: INFO: Number of nodes with available pods: 1 May 24 19:21:25.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:26.742: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:26.745: INFO: Number of nodes with available pods: 1 May 24 19:21:26.745: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:27.744: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:27.748: INFO: Number of nodes with available pods: 1 May 24 19:21:27.748: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:28.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:28.747: INFO: Number of nodes with available pods: 1 May 24 19:21:28.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:29.744: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:29.748: INFO: Number of nodes with available pods: 1 May 24 19:21:29.748: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:30.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:30.747: INFO: Number of nodes with available pods: 1 May 24 19:21:30.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:31.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:31.747: INFO: Number of nodes with available pods: 1 May 24 19:21:31.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:32.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:32.747: INFO: Number of nodes with available pods: 1 May 24 19:21:32.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:33.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:33.747: INFO: Number of nodes with available pods: 1 May 24 19:21:33.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:34.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:34.746: INFO: Number of nodes with available pods: 1 May 24 19:21:34.746: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:35.744: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:35.748: INFO: Number of nodes with available pods: 1 May 24 19:21:35.748: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:36.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:36.747: INFO: Number of nodes with available pods: 1 May 24 19:21:36.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:37.744: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:37.748: INFO: Number of nodes with available pods: 1 May 24 19:21:37.749: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:38.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:38.747: INFO: Number of nodes with available pods: 1 May 24 19:21:38.747: INFO: Node leguer-worker2 is running more than one daemon pod May 24 19:21:39.745: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 19:21:39.754: INFO: Number of nodes with available pods: 2 May 24 19:21:39.754: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8560, will wait for the garbage collector to delete the pods May 24 19:21:39.819: INFO: Deleting DaemonSet.extensions daemon-set took: 7.201889ms May 24 19:21:40.519: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.203793ms May 24 19:21:47.922: INFO: Number of nodes with available pods: 0 May 24 19:21:47.922: INFO: Number of running nodes: 0, number of available pods: 0 May 24 19:21:47.925: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"835894"},"items":null} May 24 19:21:47.928: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"835894"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:21:47.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8560" for this suite. 
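The simple DaemonSet above lands only on the two worker nodes because its pods do not tolerate the control-plane taint, which is exactly what the repeated "can't tolerate node leguer-control-plane" lines record. A minimal sketch of such a DaemonSet; the name matches the log, the labels and image are assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label; selector and template labels must agree
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # illustrative; one pod per schedulable (untainted) node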
• [SLOW TEST:26.369 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":18,"completed":15,"skipped":4881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:21:47.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:21:54.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2859" for this suite. STEP: Destroying namespace "nsdeletetest-7936" for this suite. May 24 19:21:54.072: INFO: Namespace nsdeletetest-7936 was already deleted STEP: Destroying namespace "nsdeletetest-2546" for this suite. 
• [SLOW TEST:6.125 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":18,"completed":16,"skipped":5046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:21:54.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 24 19:21:54.120: INFO: Waiting up to 1m0s for all nodes to be ready May 24 19:22:54.166: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:22:54.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 24 19:22:56.241: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:23:02.306: INFO: pods created so far: [1 1 1] May 24 19:23:02.306: INFO: length of pods created so far: 3 May 24 19:23:06.315: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:23:13.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4280" for this suite. 
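The preemption path exercised above relies on PriorityClasses: low-priority ReplicaSet pods fill the chosen node (leguer-worker), then a higher-priority pod arrives and the scheduler evicts some of them to make room, which is what the "[2 2 1]" pod counts reflect. A sketch of the two pieces involved; the class name, value, pod name, and image are illustrative:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-example      # illustrative; the suite registers its own classes
value: 1000
globalDefault: false
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-example
spec:
  priorityClassName: high-priority-example   # higher value lets this pod preempt lower-priority pods on the full node
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2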
[AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:23:13.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-287" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:79.337 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":18,"completed":17,"skipped":5147,"failed":0} [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:23:13.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 24 19:23:13.462: INFO: Waiting up to 1m0s for all nodes to be ready May 24 19:24:13.507: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 19:24:13.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 May 24 19:24:13.555: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. May 24 19:24:13.559: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. 
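The two "Forbidden" messages above are the expected outcome of this spec: the value field of a PriorityClass is immutable on update, while other fields and the usual verbs (GET, LIST, PATCH, DELETE) still work against the endpoint. A sketch of an update that triggers that error versus one that does not; only the name p1 comes from the log, the numeric value and description are illustrative:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: p1
value: 99                          # changing this on an existing object produces the Forbidden error above
description: updated description   # mutable fields like this one can be patched without error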
[AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:24:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4685" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 19:24:13.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8545" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.322 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":18,"completed":18,"skipped":5147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 24 19:24:13.746: INFO: Running AfterSuite actions on all nodes May 24 19:24:13.747: INFO: Running AfterSuite actions on node 1 May 24 19:24:13.747: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":18,"completed":18,"skipped":5649,"failed":0} Ran 18 of 5667 Specs in 853.489 seconds SUCCESS! -- 18 Passed | 0 Failed | 0 Pending | 5649 Skipped PASS Ginkgo ran 1 suite in 14m15.088620562s Test Suite Passed