I0521 16:06:27.075946 17 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0521 16:06:27.076182 17 e2e.go:129] Starting e2e run "322b3fd9-c2e0-4e8e-a78c-370b3b405bc3" on Ginkgo node 1
{"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621613185 - Will randomize all specs
Will run 17 of 5484 specs

May 21 16:06:27.171: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:06:27.180: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 21 16:06:27.203: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 16:06:27.251: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 16:06:27.251: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 21 16:06:27.251: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 21 16:06:27.262: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 21 16:06:27.262: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 21 16:06:27.262: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 21 16:06:27.262: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 21 16:06:27.262: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 21 16:06:27.262: INFO: e2e test version: v1.19.11
May 21 16:06:27.264: INFO: kube-apiserver version: v1.19.11
May 21 16:06:27.264: INFO: >>> kubeConfig: /root/.kube/config
May 21 16:06:27.269: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:06:27.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
May 21 16:06:27.300: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 21 16:06:27.309: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 21 16:06:27.322: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 16:07:27.370: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
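The mechanics behind this step: the low- and medium-priority pods are sized to occupy roughly 2/3 of each node's resources, so the later high-priority pod can only be placed by evicting one of them. Below is a minimal sketch of how a pod opts into scheduler preemption via a PriorityClass, assuming client-go v0.19.x and a reachable kubeconfig; the names `e2e-high` and `preemptor` are illustrative, not taken from this run.

```go
// Sketch only: not the e2e framework's own helper code.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Higher Value wins: the scheduler may evict lower-priority pods
	// ("preemption") to make room for a pod of this class.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-high"},
		Value:      1000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The preemptor pod opts in simply by naming the class.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "e2e-high",
			Containers:        []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```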
May 21 16:07:27.398: INFO: Created pod: pod0-sched-preemption-low-priority
May 21 16:07:27.412: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has the same requirements as the lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:07:53.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7782" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:86.224 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":1,"skipped":203,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:07:53.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 16:07:53.536: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 16:07:53.544: INFO: Waiting for terminating namespaces to be deleted...
May 21 16:07:53.547: INFO: Logging pods the apiserver thinks are on node kali-worker before test
May 21 16:07:53.557: INFO: create-loop-devs-8l686 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container loopdev ready: true, restart count 0
May 21 16:07:53.557: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:07:53.557: INFO: kube-multus-ds-f4mr9 from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container kube-multus ready: true, restart count 2
May 21 16:07:53.557: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:07:53.557: INFO: tune-sysctls-8m4jc from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container setsysctls ready: true, restart count 0
May 21 16:07:53.557: INFO: dashboard-metrics-scraper-79c5968bdc-tfgzj from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 21 16:07:53.557: INFO: speaker-x7d27 from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container speaker ready: true, restart count 0
May 21 16:07:53.557: INFO: contour-6648989f79-6s225 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container contour ready: true, restart count 0
May 21 16:07:53.557: INFO: contour-certgen-v1.15.1-7m8mh from projectcontour started at 2021-05-21 15:16:04 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container contour ready: false, restart count 0
May 21 16:07:53.557: INFO: preemptor-pod from sched-preemption-7782 started at 2021-05-21 16:07:50 +0000 UTC (1 container status recorded)
May 21 16:07:53.557: INFO: Container preemptor-pod ready: true, restart count 0
May 21 16:07:53.557: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test
May 21 16:07:53.566: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container loopdev ready: true, restart count 0
May 21 16:07:53.566: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:07:53.566: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container kube-multus ready: true, restart count 0
May 21 16:07:53.566: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:07:53.566: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container setsysctls ready: true, restart count 0
May 21 16:07:53.566: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 21 16:07:53.566: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container controller ready: true, restart count 0
May 21 16:07:53.566: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container speaker ready: true, restart count 0
May 21 16:07:53.566: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container contour ready: true, restart count 0
May 21 16:07:53.566: INFO: pod1-sched-preemption-medium-priority from sched-preemption-7782 started at 2021-05-21 16:07:33 +0000 UTC (1 container status recorded)
May 21 16:07:53.566: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-240121f7-4f78-4abc-8680-cfa5fb2cff79 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321, hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-240121f7-4f78-4abc-8680-cfa5fb2cff79 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-240121f7-4f78-4abc-8680-cfa5fb2cff79
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:08:01.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6469" for this suite.
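The pass above hinges on how the scheduler keys host-port conflicts: on the (hostIP, hostPort, protocol) triple rather than on hostPort alone, so three pods can share port 54321 as long as the triple differs. A small illustrative sketch of the three port claims, assuming only k8s.io/api/core/v1:

```go
// Same hostPort, differing hostIP or protocol: no conflict among the three.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	ports := []corev1.ContainerPort{
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}, // pod1
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolTCP}, // pod2: different hostIP
		{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}, // pod3: different protocol
	}
	for _, p := range ports {
		fmt.Printf("%s %s:%d -> container %d\n", p.Protocol, p.HostIP, p.HostPort, p.ContainerPort)
	}
}
```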
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.169 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":17,"completed":2,"skipped":717,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:08:01.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:08:14.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6694" for this suite.
STEP: Destroying namespace "nsdeletetest-8778" for this suite.
May 21 16:08:14.806: INFO: Namespace nsdeletetest-8778 was already deleted
STEP: Destroying namespace "nsdeletetest-5989" for this suite.
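The deletion pattern this namespace test exercises (delete, wait for cascading cleanup, recreate) can be reproduced with a few client-go calls. A minimal sketch, assuming client-go and the kubeconfig path from the log; the namespace name `nsdeletetest` is illustrative:

```go
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Deleting a namespace cascades: the namespace controller removes every
	// object inside it (the test's pod included) before the namespace
	// object itself disappears.
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// The namespace sits in "Terminating" while that happens; poll until gone.
	for {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdeletetest", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```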
• [SLOW TEST:13.137 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":3,"skipped":829,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:08:14.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 16:08:14.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 16:08:14.847: INFO: Waiting for terminating namespaces to be deleted...
May 21 16:08:14.851: INFO: Logging pods the apiserver thinks are on node kali-worker before test
May 21 16:08:14.860: INFO: create-loop-devs-8l686 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container loopdev ready: true, restart count 0
May 21 16:08:14.860: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:08:14.860: INFO: kube-multus-ds-f4mr9 from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container kube-multus ready: true, restart count 2
May 21 16:08:14.860: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:08:14.860: INFO: tune-sysctls-8m4jc from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container setsysctls ready: true, restart count 0
May 21 16:08:14.860: INFO: dashboard-metrics-scraper-79c5968bdc-tfgzj from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 21 16:08:14.860: INFO: speaker-x7d27 from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container speaker ready: true, restart count 0
May 21 16:08:14.860: INFO: contour-6648989f79-6s225 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container contour ready: true, restart count 0
May 21 16:08:14.860: INFO: contour-certgen-v1.15.1-7m8mh from projectcontour started at 2021-05-21 15:16:04 +0000 UTC (1 container status recorded)
May 21 16:08:14.860: INFO: Container contour ready: false, restart count 0
May 21 16:08:14.860: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test
May 21 16:08:14.869: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:14.869: INFO: Container loopdev ready: true, restart count 0
May 21 16:08:14.869: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:14.869: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:08:14.869: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:08:14.869: INFO: Container kube-multus ready: true, restart count 0
May 21 16:08:14.869: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:14.869: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:08:14.869: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:14.870: INFO: Container setsysctls ready: true, restart count 0
May 21 16:08:14.870: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:14.870: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 21 16:08:14.870: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:14.870: INFO: Container controller ready: true, restart count 0
May 21 16:08:14.870: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:14.870: INFO: Container speaker ready: true, restart count 0
May 21 16:08:14.870: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:14.870: INFO: Container contour ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
May 21 16:08:14.923: INFO: Pod create-loop-devs-26xt8 requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod create-loop-devs-8l686 requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod kindnet-n7f64 requesting resource cpu=100m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod kindnet-vlqfv requesting resource cpu=100m on Node kali-worker
May 21 16:08:14.924: INFO: Pod kube-multus-ds-f4mr9 requesting resource cpu=100m on Node kali-worker
May 21 16:08:14.924: INFO: Pod kube-multus-ds-zr9pd requesting resource cpu=100m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod kube-proxy-87457 requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod kube-proxy-ggwmf requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod tune-sysctls-8m4jc requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod tune-sysctls-m54ts requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod dashboard-metrics-scraper-79c5968bdc-tfgzj requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod kubernetes-dashboard-9f9799597-fr9hn requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod controller-675995489c-scdfn requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod speaker-kjmdr requesting resource cpu=0m on Node kali-worker2
May 21 16:08:14.924: INFO: Pod speaker-x7d27 requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod contour-6648989f79-6s225 requesting resource cpu=0m on Node kali-worker
May 21 16:08:14.924: INFO: Pod contour-6648989f79-c2th6 requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 21 16:08:14.924: INFO: Creating a pod which consumes cpu=61460m on Node kali-worker
May 21 16:08:14.931: INFO: Creating a pod which consumes cpu=61460m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a.168120b337fb2ace], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5005/filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a to kali-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a.168120b359c92323], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.226/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a.168120b367859c52], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a.168120b368cd5eff], Reason = [Created], Message = [Created container filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a]
STEP: Considering event: Type = [Normal], Name = [filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a.168120b3714a0b50], Reason = [Started], Message = [Started container filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a.168120b337b45b35], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5005/filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a to kali-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a.168120b359d0a007], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.252/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a.168120b36746d95a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a.168120b368b2981e], Reason = [Created], Message = [Created container filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a.168120b3712d9a3b], Reason = [Started], Message = [Started container filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a]
STEP: Considering event: Type = [Warning], Name = [additional-pod.168120b3b07b73c4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
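The filler-pod size in the log (cpu=61460m) is plausibly the node's allocatable CPU minus the CPU requests already scheduled there, which is what the "requesting resource" lines above tally per node. A rough sketch of that computation, assuming client-go; error handling is minimal and the node name comes from this run:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "kali-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	alloc := node.Status.Allocatable[corev1.ResourceCPU]
	free := alloc.MilliValue()

	// Subtract the CPU requests of every pod already bound to this node.
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=kali-worker",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			if req, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
				free -= req.MilliValue()
			}
		}
	}
	// A filler pod requesting exactly this much CPU saturates the node; one
	// more pod with any nonzero CPU request then fails with "Insufficient
	// cpu", which is the Warning event the test waits for.
	fmt.Printf("filler pod request: cpu=%dm\n", free)
}
```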
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:08:17.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5005" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":4,"skipped":918,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:08:18.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 16:08:18.042: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 16:08:18.051: INFO: Waiting for terminating namespaces to be deleted...
May 21 16:08:18.054: INFO: Logging pods the apiserver thinks are on node kali-worker before test
May 21 16:08:18.063: INFO: create-loop-devs-8l686 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:18.063: INFO: Container loopdev ready: true, restart count 0
May 21 16:08:18.063: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:18.063: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:08:18.063: INFO: kube-multus-ds-f4mr9 from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:08:18.063: INFO: Container kube-multus ready: true, restart count 2
May 21 16:08:18.063: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:08:18.064: INFO: tune-sysctls-8m4jc from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container setsysctls ready: true, restart count 0
May 21 16:08:18.064: INFO: dashboard-metrics-scraper-79c5968bdc-tfgzj from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 21 16:08:18.064: INFO: speaker-x7d27 from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container speaker ready: true, restart count 0
May 21 16:08:18.064: INFO: contour-6648989f79-6s225 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container contour ready: true, restart count 0
May 21 16:08:18.064: INFO: contour-certgen-v1.15.1-7m8mh from projectcontour started at 2021-05-21 15:16:04 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container contour ready: false, restart count 0
May 21 16:08:18.064: INFO: filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a from sched-pred-5005 started at 2021-05-21 16:08:14 +0000 UTC (1 container status recorded)
May 21 16:08:18.064: INFO: Container filler-pod-e86e5e1d-24ee-4764-a1d8-f6d873c27d2a ready: true, restart count 0
May 21 16:08:18.064: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test
May 21 16:08:18.072: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container loopdev ready: true, restart count 0
May 21 16:08:18.072: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:08:18.072: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container kube-multus ready: true, restart count 0
May 21 16:08:18.072: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:08:18.072: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container setsysctls ready: true, restart count 0
May 21 16:08:18.072: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 21 16:08:18.072: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container controller ready: true, restart count 0
May 21 16:08:18.072: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container speaker ready: true, restart count 0
May 21 16:08:18.072: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container contour ready: true, restart count 0
May 21 16:08:18.072: INFO: filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a from sched-pred-5005 started at 2021-05-21 16:08:14 +0000 UTC (1 container status recorded)
May 21 16:08:18.072: INFO: Container filler-pod-14f2eb4c-1d2a-45f6-9175-1aace949b87a ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.168120b55e2870b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:08:25.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9610" for this suite.
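The restricted pod above never schedules because its nodeSelector matches no node label. A sketch of such a pod spec, assuming k8s.io/api types; the selector key and value are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this label, so the pod stays
			// Pending and the scheduler emits the FailedScheduling event
			// quoted in the log above.
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	fmt.Println(pod.Spec.NodeSelector)
}
```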
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.168 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if not matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":5,"skipped":1626,"failed":0}
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:08:25.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 21 16:08:25.493: INFO: Pod name wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56 in namespace emptydir-wrapper-9870, will wait for the garbage collector to delete the pods
May 21 16:08:39.625: INFO: Deleting ReplicationController wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56 took: 7.97938ms
May 21 16:08:40.225: INFO: Terminating ReplicationController wrapped-volume-race-5543d2ea-98d2-47b1-8dc1-d9b75ca43e56 pods took: 600.344738ms
STEP: Creating RC which spawns configmap-volume pods
May 21 16:08:44.044: INFO: Pod name wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff: Found 0 pods out of 5
May 21 16:08:49.052: INFO: Pod name wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff in namespace emptydir-wrapper-9870, will wait for the garbage collector to delete the pods
May 21 16:08:59.138: INFO: Deleting ReplicationController wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff took: 7.239371ms
May 21 16:08:59.738: INFO: Terminating ReplicationController wrapped-volume-race-1da67ff7-6fa2-4132-989e-155511657bff pods took: 600.279869ms
STEP: Creating RC which spawns configmap-volume pods
May 21 16:09:03.859: INFO: Pod name wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb: Found 0 pods out of 5
May 21 16:09:08.868: INFO: Pod name wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb in namespace emptydir-wrapper-9870, will wait for the garbage collector to delete the pods
May 21 16:09:18.955: INFO: Deleting ReplicationController wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb took: 7.791271ms
May 21 16:09:19.055: INFO: Terminating ReplicationController wrapped-volume-race-8be6ce8f-023a-47e6-8d07-e94c415deaeb pods took: 100.278997ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:09:30.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9870" for this suite.
• [SLOW TEST:65.653 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":6,"skipped":1696,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:09:30.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:09:30.893: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 21 16:09:30.901: INFO: Number of nodes with available pods: 0
May 21 16:09:30.901: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
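The "Change node label to blue" step works because this DaemonSet's pod template carries a nodeSelector: labeling a node schedules a daemon pod there, and removing or changing the label evicts it. A sketch of the node-labeling half, assuming client-go; the label key `color` is illustrative (the e2e framework generates its own key):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "kali-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	// Must match the DaemonSet's spec.template.spec.nodeSelector for the
	// daemon pod to be created on this node.
	node.Labels["color"] = "blue"
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```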
May 21 16:09:30.918: INFO: Number of nodes with available pods: 0
May 21 16:09:30.918: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:31.922: INFO: Number of nodes with available pods: 0
May 21 16:09:31.922: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:32.923: INFO: Number of nodes with available pods: 1
May 21 16:09:32.923: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 21 16:09:32.940: INFO: Number of nodes with available pods: 1
May 21 16:09:32.940: INFO: Number of running nodes: 0, number of available pods: 1
May 21 16:09:33.945: INFO: Number of nodes with available pods: 0
May 21 16:09:33.945: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 21 16:09:33.956: INFO: Number of nodes with available pods: 0
May 21 16:09:33.956: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:34.960: INFO: Number of nodes with available pods: 0
May 21 16:09:34.960: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:35.962: INFO: Number of nodes with available pods: 0
May 21 16:09:35.962: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:36.961: INFO: Number of nodes with available pods: 0
May 21 16:09:36.961: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:37.961: INFO: Number of nodes with available pods: 0
May 21 16:09:37.961: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:38.961: INFO: Number of nodes with available pods: 0
May 21 16:09:38.961: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:39.962: INFO: Number of nodes with available pods: 0
May 21 16:09:39.962: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:40.960: INFO: Number of nodes with available pods: 0
May 21 16:09:40.960: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:41.960: INFO: Number of nodes with available pods: 0
May 21 16:09:41.960: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:42.960: INFO: Number of nodes with available pods: 1
May 21 16:09:42.960: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5822, will wait for the garbage collector to delete the pods
May 21 16:09:43.027: INFO: Deleting DaemonSet.extensions daemon-set took: 6.344789ms
May 21 16:09:43.627: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.123017ms
May 21 16:09:50.229: INFO: Number of nodes with available pods: 0
May 21 16:09:50.229: INFO: Number of running nodes: 0, number of available pods: 0
May 21 16:09:50.234: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5822/daemonsets","resourceVersion":"31933"},"items":null}
May 21 16:09:50.237: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5822/pods","resourceVersion":"31933"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:09:50.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5822" for this suite.
• [SLOW TEST:19.413 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":7,"skipped":2285,"failed":0}
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:09:50.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:09:50.310: INFO: Create a RollingUpdate DaemonSet
May 21 16:09:50.314: INFO: Check that daemon pods launch on every node of the cluster
May 21 16:09:50.318: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:09:50.321: INFO: Number of nodes with available pods: 0
May 21 16:09:50.321: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:51.326: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:09:51.330: INFO: Number of nodes with available pods: 0
May 21 16:09:51.330: INFO: Node kali-worker is running more than one daemon pod
May 21 16:09:52.327: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:09:52.330: INFO: Number of nodes with available pods: 2
May 21 16:09:52.330: INFO: Number of running nodes: 2, number of available pods: 2
May 21 16:09:52.330: INFO: Update the DaemonSet to trigger a rollout
May 21 16:09:52.339: INFO: Updating DaemonSet daemon-set
May 21 16:10:00.356: INFO: Roll back the DaemonSet before rollout is complete
May 21 16:10:00.366: INFO: Updating DaemonSet daemon-set
May 21 16:10:00.366: INFO: Make sure DaemonSet rollback is complete
May 21 16:10:00.370: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:00.370: INFO: Pod daemon-set-srwks is not available
May 21 16:10:00.374: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:01.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:01.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:01.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:02.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:02.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:02.383: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:03.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:03.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:03.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:04.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:04.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:04.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:05.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:05.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:05.383: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:06.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:06.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:06.383: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:07.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:07.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:07.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:08.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:08.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:08.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:09.379: INFO: Wrong image for pod: daemon-set-srwks. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May 21 16:10:09.379: INFO: Pod daemon-set-srwks is not available
May 21 16:10:09.384: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:10:10.379: INFO: Pod daemon-set-7kt9j is not available
May 21 16:10:10.383: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7708, will wait for the garbage collector to delete the pods
May 21 16:10:10.450: INFO: Deleting DaemonSet.extensions daemon-set took: 6.389231ms
May 21 16:10:11.050: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.333813ms
May 21 16:10:13.253: INFO: Number of nodes with available pods: 0
May 21 16:10:13.253: INFO: Number of running nodes: 0, number of available pods: 0
May 21 16:10:13.256: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7708/daemonsets","resourceVersion":"32115"},"items":null}
May 21 16:10:13.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7708/pods","resourceVersion":"32115"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:10:13.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7708" for this suite.
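The rollback sequence just logged (push a bad image, then undo before the rollout completes) can be approximated with two template updates; `kubectl rollout undo daemonset/daemon-set` achieves the same effect. A sketch, assuming client-go; the `default` namespace is illustrative, the names and images mirror the log:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	ds, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Trigger a rollout with an image that can never be pulled...
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = cs.AppsV1().DaemonSets("default").Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// ...then roll back by restoring the known-good image. Pods that never
	// became available are replaced; healthy pods stay untouched, which is
	// the "without unnecessary restarts" property the test checks.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := cs.AppsV1().DaemonSets("default").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```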
• [SLOW TEST:23.019 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":8,"skipped":2358,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:10:13.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:10:19.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1242" for this suite.
STEP: Destroying namespace "nsdeletetest-5489" for this suite.
May 21 16:10:19.398: INFO: Namespace nsdeletetest-5489 was already deleted
STEP: Destroying namespace "nsdeletetest-7217" for this suite.
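This services variant mirrors the pods variant earlier in the run; the final assertion amounts to listing services in the recreated namespace and expecting zero. A minimal sketch, assuming client-go and an illustrative namespace name:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// After delete-and-recreate, the fresh namespace must be empty.
	svcs, err := cs.CoreV1().Services("nsdeletetest").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining: %d (expected 0)\n", len(svcs.Items))
}
```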
• [SLOW TEST:6.115 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":9,"skipped":2868,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:10:19.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 16:10:19.432: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 16:10:19.440: INFO: Waiting for terminating namespaces to be deleted...
May 21 16:10:19.443: INFO: Logging pods the apiserver thinks are on node kali-worker before test
May 21 16:10:19.452: INFO: create-loop-devs-8l686 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container loopdev ready: true, restart count 0
May 21 16:10:19.452: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:10:19.452: INFO: kube-multus-ds-f4mr9 from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container kube-multus ready: true, restart count 2
May 21 16:10:19.452: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:10:19.452: INFO: tune-sysctls-8m4jc from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container setsysctls ready: true, restart count 0
May 21 16:10:19.452: INFO: dashboard-metrics-scraper-79c5968bdc-tfgzj from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 21 16:10:19.452: INFO: speaker-x7d27 from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container speaker ready: true, restart count 0
May 21 16:10:19.452: INFO: contour-6648989f79-6s225 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container contour ready: true, restart count 0
May 21 16:10:19.452: INFO: contour-certgen-v1.15.1-7m8mh from projectcontour started at 2021-05-21 15:16:04 +0000 UTC (1 container status recorded)
May 21 16:10:19.452: INFO: Container contour ready: false, restart count 0
May 21 16:10:19.452: INFO: Logging pods the apiserver thinks are on node kali-worker2 before test
May 21 16:10:19.460: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:10:19.460: INFO: Container loopdev ready: true, restart count 0
May 21 16:10:19.460: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:10:19.460: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:10:19.460: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container status recorded)
May 21 16:10:19.460: INFO: Container kube-multus ready: true, restart count 0
May 21 16:10:19.460: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container status recorded)
May 21 16:10:19.460: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:10:19.460: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container status recorded)
May 21 16:10:19.461: INFO: Container setsysctls ready: true, restart count 0
May 21 16:10:19.461: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:10:19.461: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 21 16:10:19.461: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:10:19.461: INFO: Container controller ready: true, restart count 0
May 21 16:10:19.461: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container status recorded)
May 21 16:10:19.461: INFO: Container speaker ready: true, restart count 0
May 21 16:10:19.461: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container status recorded)
May 21 16:10:19.461: INFO: Container contour ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6b89cc24-0d2a-418c-a134-ccb7ec5b8367 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-6b89cc24-0d2a-418c-a134-ccb7ec5b8367 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6b89cc24-0d2a-418c-a134-ccb7ec5b8367
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:10:23.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4049" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":10,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:10:23.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 21 16:10:23.590: INFO: Waiting up to 1m0s for all nodes to be ready May 21 16:11:23.637: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. May 21 16:11:23.663: INFO: Created pod: pod0-sched-preemption-low-priority May 21 16:11:23.683: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:11:41.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9369" for this suite. 
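For reference, the behavior this spec exercises can be reproduced outside the e2e framework with a plain nodeSelector. A minimal sketch, assuming a hypothetical label key (example.com/e2e-demo), pod name, and pause image in place of the generated ones:

# Label the target node first, e.g.: kubectl label node kali-worker example.com/e2e-demo=42
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo          # hypothetical name
spec:
  nodeSelector:
    example.com/e2e-demo: "42"     # label values are strings, so the number is quoted
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2

Removing the label again (kubectl label node kali-worker example.com/e2e-demo-) mirrors the cleanup steps at the end of the spec.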
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:10:23.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 21 16:10:23.590: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 16:11:23.637: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
May 21 16:11:23.663: INFO: Created pod: pod0-sched-preemption-low-priority
May 21 16:11:23.683: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:11:41.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9369" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:78.234 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":11,"skipped":4240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
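The spec above fills two thirds of each node with low- and medium-priority pods, then submits a critical pod with the same resource request, forcing the scheduler to evict a lower-priority victim. As a stand-alone sketch: criticality is expressed through the built-in system priority classes, which by default may only be used in the kube-system namespace; the pod name and request sizes here are hypothetical and would need to exceed the node's remaining capacity for a preemption to occur:

apiVersion: v1
kind: Pod
metadata:
  name: critical-pod-demo      # hypothetical name
  namespace: kube-system       # built-in system-* priority classes are restricted to kube-system by default
spec:
  priorityClassName: system-cluster-critical
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "500m"            # hypothetical; size so the pod cannot fit without evicting a victim
        memory: 256Mi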
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":12,"skipped":4465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:11:41.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 21 16:11:41.924: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 16:11:41.927: INFO: Number of nodes with available pods: 0 May 21 16:11:41.927: INFO: Node kali-worker is running more than one daemon pod May 21 16:11:42.933: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 16:11:42.937: INFO: Number of nodes with available pods: 0 May 21 16:11:42.937: INFO: Node kali-worker is running more than one daemon pod May 21 16:11:43.932: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 16:11:43.936: INFO: Number of nodes with available pods: 2 May 21 16:11:43.936: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:11:41.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 21 16:11:41.924: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:41.927: INFO: Number of nodes with available pods: 0
May 21 16:11:41.927: INFO: Node kali-worker is running more than one daemon pod
May 21 16:11:42.933: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:42.937: INFO: Number of nodes with available pods: 0
May 21 16:11:42.937: INFO: Node kali-worker is running more than one daemon pod
May 21 16:11:43.932: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:43.936: INFO: Number of nodes with available pods: 2
May 21 16:11:43.936: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 21 16:11:43.955: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:43.958: INFO: Number of nodes with available pods: 1
May 21 16:11:43.958: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:11:44.964: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:44.968: INFO: Number of nodes with available pods: 1
May 21 16:11:44.968: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:11:45.964: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:11:45.968: INFO: Number of nodes with available pods: 2
May 21 16:11:45.968: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6017, will wait for the garbage collector to delete the pods
May 21 16:11:46.034: INFO: Deleting DaemonSet.extensions daemon-set took: 6.461572ms
May 21 16:11:46.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.223275ms
May 21 16:11:49.338: INFO: Number of nodes with available pods: 0
May 21 16:11:49.338: INFO: Number of running nodes: 0, number of available pods: 0
May 21 16:11:49.342: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6017/daemonsets","resourceVersion":"32716"},"items":null}
May 21 16:11:49.345: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6017/pods","resourceVersion":"32716"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:11:49.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6017" for this suite.
• [SLOW TEST:7.499 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":13,"skipped":4627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
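This spec creates a bare DaemonSet, forces one of its pods into the Failed phase, and verifies the controller recreates it. A minimal DaemonSet of the same shape (name and labels hypothetical; the httpd image matches the one this run rolls away from in a later spec):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo        # hypothetical name
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
        ports:
        - containerPort: 80

Without a matching toleration in the pod template, such a DaemonSet skips tainted nodes, which is exactly the repeated "can't tolerate node kali-control-plane" lines above.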
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:11:49.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 16:11:49.404: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 16:11:49.412: INFO: Waiting for terminating namespaces to be deleted...
May 21 16:11:49.417: INFO: Logging pods the apiserver thinks is on node kali-worker before test
May 21 16:11:49.428: INFO: create-loop-devs-8l686 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container loopdev ready: true, restart count 0
May 21 16:11:49.428: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:11:49.428: INFO: kube-multus-ds-f4mr9 from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container kube-multus ready: true, restart count 2
May 21 16:11:49.428: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:11:49.428: INFO: tune-sysctls-8m4jc from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container setsysctls ready: true, restart count 0
May 21 16:11:49.428: INFO: dashboard-metrics-scraper-79c5968bdc-tfgzj from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container dashboard-metrics-scraper ready: true, restart count 0
May 21 16:11:49.428: INFO: speaker-x7d27 from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container speaker ready: true, restart count 0
May 21 16:11:49.428: INFO: contour-6648989f79-6s225 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container contour ready: true, restart count 0
May 21 16:11:49.428: INFO: contour-certgen-v1.15.1-7m8mh from projectcontour started at 2021-05-21 15:16:04 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.428: INFO: Container contour ready: false, restart count 0
May 21 16:11:49.428: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
May 21 16:11:49.437: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container loopdev ready: true, restart count 0
May 21 16:11:49.437: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container kindnet-cni ready: true, restart count 0
May 21 16:11:49.437: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container kube-multus ready: true, restart count 0
May 21 16:11:49.437: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container kube-proxy ready: true, restart count 0
May 21 16:11:49.437: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container setsysctls ready: true, restart count 0
May 21 16:11:49.437: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container kubernetes-dashboard ready: true, restart count 0
May 21 16:11:49.437: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container controller ready: true, restart count 0
May 21 16:11:49.437: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container speaker ready: true, restart count 0
May 21 16:11:49.437: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container contour ready: true, restart count 0
May 21 16:11:49.437: INFO: pod1-sched-preemption-medium-priority from sched-preemption-9369 started at 2021-05-21 16:11:23 +0000 UTC (1 container statuses recorded)
May 21 16:11:49.437: INFO: Container pod1-sched-preemption-medium-priority ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e1622ca4-f4e6-4d21-b2ae-ab530f4163d6 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-e1622ca4-f4e6-4d21-b2ae-ab530f4163d6 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e1622ca4-f4e6-4d21-b2ae-ab530f4163d6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:16:53.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9957" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:304.155 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":14,"skipped":5062,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
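The conflict demonstrated above: pod4 claims hostPort 54322 on 0.0.0.0 (all addresses of the node), so pod5's request for the same port and protocol on 127.0.0.1 of the same node cannot be satisfied and pod5 stays unscheduled. A sketch of the two specs, pinned to one node through the well-known kubernetes.io/hostname label (the pod names echo the log; the image and container port are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker   # keep both pods on the same node
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322                     # hostIP omitted = 0.0.0.0: claims the port on every address
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1                   # overlaps pod4's 0.0.0.0 claim, so the pod stays Pending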
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:16:53.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:16:53.579: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 21 16:16:53.588: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:53.591: INFO: Number of nodes with available pods: 0
May 21 16:16:53.591: INFO: Node kali-worker is running more than one daemon pod
May 21 16:16:54.596: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:54.599: INFO: Number of nodes with available pods: 0
May 21 16:16:54.599: INFO: Node kali-worker is running more than one daemon pod
May 21 16:16:55.596: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:55.600: INFO: Number of nodes with available pods: 2
May 21 16:16:55.600: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 21 16:16:55.628: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:55.628: INFO: Wrong image for pod: daemon-set-gfb44. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:55.632: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:56.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:56.636: INFO: Wrong image for pod: daemon-set-gfb44. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:56.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:57.637: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:57.637: INFO: Wrong image for pod: daemon-set-gfb44. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:57.641: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:58.637: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:58.637: INFO: Wrong image for pod: daemon-set-gfb44. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:58.637: INFO: Pod daemon-set-gfb44 is not available
May 21 16:16:58.641: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:16:59.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:16:59.636: INFO: Pod daemon-set-jwkdz is not available
May 21 16:16:59.641: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:00.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:00.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:01.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:01.636: INFO: Pod daemon-set-57slb is not available
May 21 16:17:01.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:02.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:02.636: INFO: Pod daemon-set-57slb is not available
May 21 16:17:02.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:03.638: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:03.638: INFO: Pod daemon-set-57slb is not available
May 21 16:17:03.642: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:04.638: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:04.638: INFO: Pod daemon-set-57slb is not available
May 21 16:17:04.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:05.636: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:05.636: INFO: Pod daemon-set-57slb is not available
May 21 16:17:05.640: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:06.637: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:06.637: INFO: Pod daemon-set-57slb is not available
May 21 16:17:06.642: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:07.639: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:07.639: INFO: Pod daemon-set-57slb is not available
May 21 16:17:07.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:08.638: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:08.638: INFO: Pod daemon-set-57slb is not available
May 21 16:17:08.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:09.638: INFO: Wrong image for pod: daemon-set-57slb. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine.
May 21 16:17:09.638: INFO: Pod daemon-set-57slb is not available
May 21 16:17:09.643: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:10.637: INFO: Pod daemon-set-jhxrx is not available
May 21 16:17:10.641: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 21 16:17:10.646: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:10.649: INFO: Number of nodes with available pods: 1
May 21 16:17:10.650: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:17:11.655: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:11.659: INFO: Number of nodes with available pods: 1
May 21 16:17:11.659: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:17:12.655: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:17:12.659: INFO: Number of nodes with available pods: 2
May 21 16:17:12.659: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7934, will wait for the garbage collector to delete the pods
May 21 16:17:12.738: INFO: Deleting DaemonSet.extensions daemon-set took: 7.627897ms
May 21 16:17:13.338: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.275534ms
May 21 16:17:20.441: INFO: Number of nodes with available pods: 0
May 21 16:17:20.441: INFO: Number of running nodes: 0, number of available pods: 0
May 21 16:17:20.444: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7934/daemonsets","resourceVersion":"34025"},"items":null}
May 21 16:17:20.447: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7934/pods","resourceVersion":"34025"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:17:20.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7934" for this suite.
• [SLOW TEST:26.936 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":15,"skipped":5209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
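In the spec above the pod template image is changed from docker.io/library/httpd:2.4.38-alpine to k8s.gcr.io/e2e-test-images/agnhost:2.20, and the controller replaces pods node by node, which is why individual pods briefly report "is not available". The strategy stanza that drives this behavior, sketched with hypothetical names:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo        # hypothetical name
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # the default: at most one node's pod is down at a time
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # editing this field triggers the rollout

An imperative edit such as kubectl set image daemonset/daemon-set-demo app=k8s.gcr.io/e2e-test-images/agnhost:2.20 has the same effect.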
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:17:20.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 21 16:17:20.511: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 16:18:20.559: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:18:20.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
May 21 16:18:22.622: INFO: found a healthy node: kali-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 16:18:30.687: INFO: pods created so far: [1 1 1]
May 21 16:18:30.687: INFO: length of pods created so far: 3
May 21 16:18:34.697: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:18:41.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1776" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:18:41.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9161" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:81.323 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":16,"skipped":5277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
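This spec runs ReplicaSets at increasing priorities on a single node (the "[1 1 1]" pod counts), then adds a higher-priority workload whose placement preempts one of them (the counts become "[2 2 1]"). The building blocks, sketched with hypothetical names and values:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: demo-priority-high     # hypothetical name
value: 1000                    # higher value wins; lower-valued pods become preemption candidates
globalDefault: false
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: high-priority-rs       # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: high-priority-rs
  template:
    metadata:
      labels:
        app: high-priority-rs
    spec:
      priorityClassName: demo-priority-high
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
        resources:
          requests:
            cpu: "500m"        # hypothetical; sized so the node cannot fit it without evicting a victim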
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 16:18:41.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 21 16:18:41.852: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:41.855: INFO: Number of nodes with available pods: 0
May 21 16:18:41.855: INFO: Node kali-worker is running more than one daemon pod
May 21 16:18:42.861: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:42.865: INFO: Number of nodes with available pods: 0
May 21 16:18:42.865: INFO: Node kali-worker is running more than one daemon pod
May 21 16:18:43.860: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:43.864: INFO: Number of nodes with available pods: 2
May 21 16:18:43.864: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 21 16:18:43.881: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:43.884: INFO: Number of nodes with available pods: 1
May 21 16:18:43.884: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:18:44.889: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:44.893: INFO: Number of nodes with available pods: 1
May 21 16:18:44.893: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:18:45.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:45.894: INFO: Number of nodes with available pods: 1
May 21 16:18:45.894: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:18:46.888: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:46.891: INFO: Number of nodes with available pods: 1
May 21 16:18:46.891: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:18:47.889: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:47.892: INFO: Number of nodes with available pods: 1
May 21 16:18:47.893: INFO: Node kali-worker2 is running more than one daemon pod
May 21 16:18:48.889: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 21 16:18:48.892: INFO: Number of nodes with available pods: 2
May 21 16:18:48.892: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4111, will wait for the garbage collector to delete the pods
May 21 16:18:48.956: INFO: Deleting DaemonSet.extensions daemon-set took: 6.357114ms
May 21 16:18:49.056: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.298045ms
May 21 16:19:00.460: INFO: Number of nodes with available pods: 0
May 21 16:19:00.460: INFO: Number of running nodes: 0, number of available pods: 0
May 21 16:19:00.463: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4111/daemonsets","resourceVersion":"34653"},"items":null}
May 21 16:19:00.465: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4111/pods","resourceVersion":"34653"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 16:19:00.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4111" for this suite.
• [SLOW TEST:18.691 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":17,"skipped":5416,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 21 16:19:00.486: INFO: Running AfterSuite actions on all nodes
May 21 16:19:00.486: INFO: Running AfterSuite actions on node 1
May 21 16:19:00.486: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":17,"completed":17,"skipped":5467,"failed":0}

Ran 17 of 5484 Specs in 753.320 seconds
SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5467 Skipped
PASS

Ginkgo ran 1 suite in 12m35.023548895s
Test Suite Passed