I0603 11:40:13.577246 17 e2e.go:129] Starting e2e run "4de3f698-ee6f-4aab-b9d0-45374f9b875c" on Ginkgo node 1 {"msg":"Test Suite starting","total":17,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1622720412 - Will randomize all specs Will run 17 of 5771 specs Jun 3 11:40:13.676: INFO: >>> kubeConfig: /root/.kube/config Jun 3 11:40:13.680: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 3 11:40:13.707: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 3 11:40:13.759: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 3 11:40:13.759: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jun 3 11:40:13.759: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 3 11:40:13.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Jun 3 11:40:13.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jun 3 11:40:13.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) Jun 3 11:40:13.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jun 3 11:40:13.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) Jun 3 11:40:13.772: INFO: e2e test version: v1.21.1 Jun 3 11:40:13.774: INFO: kube-apiserver version: v1.21.1 Jun 3 11:40:13.774: INFO: >>> kubeConfig: /root/.kube/config Jun 3 11:40:13.779: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:40:13.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets W0603 11:40:13.814778 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 3 11:40:13.814: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 3 11:40:13.824: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 11:40:13.846: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 3 11:40:13.853: INFO: Number of nodes with available pods: 0 Jun 3 11:40:13.853: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jun 3 11:40:13.870: INFO: Number of nodes with available pods: 0 Jun 3 11:40:13.870: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:14.875: INFO: Number of nodes with available pods: 0 Jun 3 11:40:14.875: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:15.875: INFO: Number of nodes with available pods: 1 Jun 3 11:40:15.875: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 3 11:40:15.894: INFO: Number of nodes with available pods: 1 Jun 3 11:40:15.894: INFO: Number of running nodes: 0, number of available pods: 1 Jun 3 11:40:16.902: INFO: Number of nodes with available pods: 0 Jun 3 11:40:16.902: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 3 11:40:16.915: INFO: Number of nodes with available pods: 0 Jun 3 11:40:16.915: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:17.919: INFO: Number of nodes with available pods: 0 Jun 3 11:40:17.919: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:18.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:18.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:19.981: INFO: Number of nodes with available pods: 0 Jun 3 11:40:19.982: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:20.919: INFO: Number of nodes with available pods: 0 Jun 3 11:40:20.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:21.919: INFO: Number of nodes with available pods: 0 Jun 3 11:40:21.919: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:22.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:22.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:23.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:23.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:24.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:24.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:25.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:25.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:26.920: INFO: Number of nodes with available pods: 0 Jun 3 11:40:26.920: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:40:27.920: INFO: Number of nodes with available pods: 1 Jun 3 11:40:27.920: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6465, will wait for the garbage collector to delete the pods Jun 3 11:40:27.987: INFO: Deleting DaemonSet.extensions daemon-set took: 5.151078ms Jun 3 11:40:28.087: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.402922ms Jun 3 11:40:35.491: INFO: Number of nodes with available pods: 0 Jun 3 11:40:35.491: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 11:40:35.499: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3303643"},"items":null} Jun 3 11:40:35.502: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3303643"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:40:35.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6465" for this suite. • [SLOW TEST:21.753 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":17,"completed":1,"skipped":52,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:40:35.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:40:35.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1349" for this suite. STEP: Destroying namespace "nspatchtest-a190e35f-6d39-4d06-9879-9b0268762c23-9115" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":17,"completed":2,"skipped":330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:40:35.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 3 11:40:35.650: INFO: Waiting up to 1m0s for all nodes to be ready Jun 3 11:41:35.709: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Jun 3 11:41:35.737: INFO: Created pod: pod0-sched-preemption-low-priority Jun 3 11:41:35.755: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:41:57.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-356" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:82.231 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":17,"completed":3,"skipped":644,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:41:57.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 11:41:57.886: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 11:41:57.895: INFO: Waiting for terminating namespaces to be deleted... Jun 3 11:41:57.898: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test Jun 3 11:41:57.906: INFO: chaos-daemon-vxnd4 from default started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.906: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:41:57.907: INFO: create-loop-devs-2zz2t from kube-system started at 2021-06-01 18:06:45 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:41:57.907: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container kindnet-cni ready: true, restart count 6 Jun 3 11:41:57.907: INFO: kube-multus-ds-vvcq9 from kube-system started at 2021-06-01 18:06:35 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container kube-multus ready: true, restart count 0 Jun 3 11:41:57.907: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:41:57.907: INFO: tune-sysctls-dkbjj from kube-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:41:57.907: INFO: speaker-c7g2h from metallb-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container speaker ready: true, restart count 0 Jun 3 11:41:57.907: INFO: preemptor-pod from sched-preemption-356 started at 2021-06-03 11:41:55 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.907: INFO: Container preemptor-pod ready: true, restart count 0 Jun 3 11:41:57.907: INFO: Logging pods the apiserver thinks 
is on node v1.21-worker2 before test Jun 3 11:41:57.916: INFO: chaos-controller-manager-69c479c674-6l597 from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container chaos-mesh ready: true, restart count 0 Jun 3 11:41:57.916: INFO: chaos-daemon-zspvx from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:41:57.916: INFO: dockerd from default started at 2021-05-25 17:35:22 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container dockerd ready: true, restart count 0 Jun 3 11:41:57.916: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:41:57.916: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container kindnet-cni ready: true, restart count 7 Jun 3 11:41:57.916: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container kube-multus ready: true, restart count 1 Jun 3 11:41:57.916: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:41:57.916: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:41:57.916: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 Jun 3 11:41:57.916: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 3 11:41:57.916: INFO: chaos-operator-ce-5754fd4b69-zxmdb from litmus started at 2021-05-25 19:03:14 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container chaos-operator ready: true, restart count 0 Jun 3 11:41:57.916: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container controller ready: true, restart count 0 Jun 3 11:41:57.916: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container speaker ready: true, restart count 0 Jun 3 11:41:57.916: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container contour ready: true, restart count 0 Jun 3 11:41:57.916: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container contour ready: true, restart count 0 Jun 3 11:41:57.916: INFO: pod1-sched-preemption-medium-priority from sched-preemption-356 started at 2021-06-03 11:41:47 +0000 UTC (1 container statuses recorded) Jun 3 11:41:57.916: INFO: Container 
pod1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: verifying the node has the label node v1.21-worker STEP: verifying the node has the label node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod chaos-controller-manager-69c479c674-6l597 requesting resource cpu=25m on Node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod chaos-daemon-vxnd4 requesting resource cpu=0m on Node v1.21-worker Jun 3 11:41:57.982: INFO: Pod chaos-daemon-zspvx requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod dockerd requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod create-loop-devs-2zz2t requesting resource cpu=0m on Node v1.21-worker Jun 3 11:41:57.982: INFO: Pod create-loop-devs-lfj6m requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod kindnet-5xbgn requesting resource cpu=100m on Node v1.21-worker2 Jun 3 11:41:57.982: INFO: Pod kindnet-64qsq requesting resource cpu=100m on Node v1.21-worker Jun 3 11:41:57.983: INFO: Pod kube-multus-ds-chmxd requesting resource cpu=100m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod kube-multus-ds-vvcq9 requesting resource cpu=100m on Node v1.21-worker Jun 3 11:41:57.983: INFO: Pod kube-proxy-pjm2c requesting resource cpu=0m on Node v1.21-worker Jun 3 11:41:57.983: INFO: Pod kube-proxy-wg4wq requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod tune-sysctls-b7rgm requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod tune-sysctls-dkbjj requesting resource cpu=0m on Node v1.21-worker Jun 3 11:41:57.983: INFO: Pod dashboard-metrics-scraper-856586f554-l66m5 requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod kubernetes-dashboard-78c79f97b4-k777m requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod chaos-operator-ce-5754fd4b69-zxmdb requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod controller-675995489c-x7gj2 requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod speaker-c7g2h requesting resource cpu=0m on Node v1.21-worker Jun 3 11:41:57.983: INFO: Pod speaker-lw6f6 requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod contour-74948c9879-n2262 requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod contour-74948c9879-w22pr requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod pod1-sched-preemption-medium-priority requesting resource cpu=0m on Node v1.21-worker2 Jun 3 11:41:57.983: INFO: Pod preemptor-pod requesting resource cpu=0m on Node v1.21-worker STEP: Starting Pods to consume most of the cluster CPU. Jun 3 11:41:57.983: INFO: Creating a pod which consumes cpu=61460m on Node v1.21-worker Jun 3 11:41:57.990: INFO: Creating a pod which consumes cpu=61442m on Node v1.21-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-17b430d5-7552-4454-8292-9cf14631f161.16850fb6accdf370], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6386/filler-pod-17b430d5-7552-4454-8292-9cf14631f161 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-17b430d5-7552-4454-8292-9cf14631f161.16850fb6cb8789d9], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.175/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-17b430d5-7552-4454-8292-9cf14631f161.16850fb6d71a9304], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-17b430d5-7552-4454-8292-9cf14631f161.16850fb6d87c6d9d], Reason = [Created], Message = [Created container filler-pod-17b430d5-7552-4454-8292-9cf14631f161] STEP: Considering event: Type = [Normal], Name = [filler-pod-17b430d5-7552-4454-8292-9cf14631f161.16850fb6e18e0611], Reason = [Started], Message = [Started container filler-pod-17b430d5-7552-4454-8292-9cf14631f161] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1.16850fb6ad6be021], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6386/filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1.16850fb6cb79d94c], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.99/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1.16850fb6d77c5ecb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1.16850fb6d8ad77f7], Reason = [Created], Message = [Created container filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1] STEP: Considering event: Type = [Normal], Name = [filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1.16850fb6e1e55016], Reason = [Started], Message = [Started container filler-pod-aa4d1dca-c2d7-4803-a248-7cb5bacb35d1] STEP: Considering event: Type = [Warning], Name = [additional-pod.16850fb72558850c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node v1.21-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node v1.21-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:42:01.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6386" for this suite. 
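------------------------------
The "Insufficient cpu" FailedScheduling event above is driven purely by resource requests: the filler pods request nearly all remaining allocatable CPU on each worker, so one more requesting pod cannot fit anywhere. A minimal sketch of such a pod, assuming client-go; the pod name and namespace are placeholders, and the quantity simply mirrors the 61460m figure logged for v1.21-worker.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-example", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					// The scheduler compares this request against the node's
					// remaining allocatable CPU; if it does not fit, the pod
					// stays Pending with a FailedScheduling event like the
					// "Insufficient cpu" one above.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("61460m"),
					},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------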
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":17,"completed":4,"skipped":649,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:42:01.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 3 11:42:01.373: INFO: Pod name wrapped-volume-race-26735b8b-15a7-4391-be86-b45990d7739c: Found 4 pods out of 5 Jun 3 11:42:06.382: INFO: Pod name wrapped-volume-race-26735b8b-15a7-4391-be86-b45990d7739c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-26735b8b-15a7-4391-be86-b45990d7739c in namespace emptydir-wrapper-6478, will wait for the garbage collector to delete the pods Jun 3 11:42:16.468: INFO: Deleting ReplicationController wrapped-volume-race-26735b8b-15a7-4391-be86-b45990d7739c took: 6.0885ms Jun 3 11:42:16.569: INFO: Terminating ReplicationController wrapped-volume-race-26735b8b-15a7-4391-be86-b45990d7739c pods took: 100.725912ms STEP: Creating RC which spawns configmap-volume pods Jun 3 11:42:20.691: INFO: Pod name wrapped-volume-race-d1bd1b78-72c3-4a3b-8e41-074f816d7174: Found 0 pods out of 5 Jun 3 11:42:25.783: INFO: Pod name wrapped-volume-race-d1bd1b78-72c3-4a3b-8e41-074f816d7174: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d1bd1b78-72c3-4a3b-8e41-074f816d7174 in namespace emptydir-wrapper-6478, will wait for the garbage collector to delete the pods Jun 3 11:42:38.263: INFO: Deleting ReplicationController wrapped-volume-race-d1bd1b78-72c3-4a3b-8e41-074f816d7174 took: 6.335883ms Jun 3 11:42:38.364: INFO: Terminating ReplicationController wrapped-volume-race-d1bd1b78-72c3-4a3b-8e41-074f816d7174 pods took: 101.225475ms STEP: Creating RC which spawns configmap-volume pods Jun 3 11:42:43.186: INFO: Pod name wrapped-volume-race-b65aaed9-21d4-4a15-933c-f4133c490efa: Found 0 pods out of 5 Jun 3 11:42:48.195: INFO: Pod name wrapped-volume-race-b65aaed9-21d4-4a15-933c-f4133c490efa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b65aaed9-21d4-4a15-933c-f4133c490efa in namespace emptydir-wrapper-6478, will wait for the garbage collector to delete the pods Jun 3 11:43:00.290: INFO: Deleting ReplicationController wrapped-volume-race-b65aaed9-21d4-4a15-933c-f4133c490efa took: 6.475261ms Jun 3 11:43:00.391: INFO: Terminating ReplicationController wrapped-volume-race-b65aaed9-21d4-4a15-933c-f4133c490efa 
pods took: 101.123254ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:43:05.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6478" for this suite. • [SLOW TEST:64.406 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":17,"completed":5,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:43:05.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 11:43:05.513: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 11:43:05.522: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 11:43:05.525: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test Jun 3 11:43:05.534: INFO: chaos-daemon-vxnd4 from default started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:43:05.534: INFO: create-loop-devs-2zz2t from kube-system started at 2021-06-01 18:06:45 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:43:05.534: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container kindnet-cni ready: true, restart count 6 Jun 3 11:43:05.534: INFO: kube-multus-ds-vvcq9 from kube-system started at 2021-06-01 18:06:35 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container kube-multus ready: true, restart count 0 Jun 3 11:43:05.534: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:43:05.534: INFO: tune-sysctls-dkbjj from kube-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:43:05.534: INFO: speaker-c7g2h from metallb-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.534: INFO: Container speaker ready: true, restart count 0 Jun 3 11:43:05.534: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test Jun 3 11:43:05.544: INFO: chaos-controller-manager-69c479c674-6l597 from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container chaos-mesh ready: true, restart count 0 Jun 3 11:43:05.544: INFO: chaos-daemon-zspvx from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:43:05.544: INFO: dockerd from default started at 2021-05-25 17:35:22 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container dockerd ready: true, restart count 0 Jun 3 11:43:05.544: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:43:05.544: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container kindnet-cni ready: true, restart count 7 Jun 3 11:43:05.544: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container kube-multus ready: true, restart count 1 Jun 3 11:43:05.544: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:43:05.544: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:43:05.544: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 Jun 3 11:43:05.544: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 3 11:43:05.544: INFO: chaos-operator-ce-5754fd4b69-zxmdb from litmus started at 2021-05-25 19:03:14 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container chaos-operator ready: true, restart count 0 Jun 3 11:43:05.544: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container controller ready: true, restart count 0 Jun 3 11:43:05.544: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container speaker ready: true, restart count 0 Jun 3 11:43:05.544: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container contour ready: true, restart count 0 Jun 3 11:43:05.544: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) Jun 3 11:43:05.544: INFO: Container contour ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-eb143f4a-563b-4482-b247-07fd38d62f12 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-eb143f4a-563b-4482-b247-07fd38d62f12 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-eb143f4a-563b-4482-b247-07fd38d62f12 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:43:09.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1254" for this suite. 
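------------------------------
The NodeSelector spec above is two moves: put a unique label on one node, then create a pod whose nodeSelector requires that label so it can only land there. A client-go sketch of the same idea; the label key/value echo the pattern in the log but are otherwise placeholders, as are the pod name and namespace.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Label the chosen node; a strategic-merge patch leaves its other labels intact.
	labelPatch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-example":"42"}}}`)
	if _, err := client.CoreV1().Nodes().Patch(ctx, "v1.21-worker",
		types.StrategicMergePatchType, labelPatch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// A pod that can only schedule onto nodes carrying that label.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: "default"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
}
------------------------------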
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":17,"completed":6,"skipped":1277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:43:09.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 3 11:43:09.688: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:09.691: INFO: Number of nodes with available pods: 0 Jun 3 11:43:09.691: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:43:10.696: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:10.699: INFO: Number of nodes with available pods: 0 Jun 3 11:43:10.699: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:43:11.696: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:11.700: INFO: Number of nodes with available pods: 2 Jun 3 11:43:11.700: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jun 3 11:43:11.720: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:11.727: INFO: Number of nodes with available pods: 1 Jun 3 11:43:11.727: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:43:12.733: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:12.738: INFO: Number of nodes with available pods: 1 Jun 3 11:43:12.738: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:43:13.733: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:13.737: INFO: Number of nodes with available pods: 2 Jun 3 11:43:13.737: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2128, will wait for the garbage collector to delete the pods Jun 3 11:43:13.802: INFO: Deleting DaemonSet.extensions daemon-set took: 5.247701ms Jun 3 11:43:13.904: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.270685ms Jun 3 11:43:25.107: INFO: Number of nodes with available pods: 0 Jun 3 11:43:25.107: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 11:43:25.110: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3305240"},"items":null} Jun 3 11:43:25.113: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3305240"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:43:25.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2128" for this suite. 
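------------------------------
For context, the DaemonSet specs in this run all follow the same pattern: create a DaemonSet, then poll until the number of nodes running an available daemon pod matches the number of schedulable nodes (the control-plane node is skipped because its master NoSchedule taint is not tolerated, as the messages above note). A minimal client-go sketch of that pattern; the namespace, labels and image are placeholders.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1", // image the later specs in this log use
				}}},
			},
		},
	}
	if _, err := client.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The counters the log keeps printing ("Number of running nodes / available
	// pods") correspond to these status fields, which fill in as the controller
	// places a pod on every eligible node.
	got, err := client.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("desired:", got.Status.DesiredNumberScheduled, "available:", got.Status.NumberAvailable)
}
------------------------------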
• [SLOW TEST:15.507 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":17,"completed":7,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:43:25.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:43:31.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2739" for this suite. STEP: Destroying namespace "nsdeletetest-6990" for this suite. Jun 3 11:43:31.260: INFO: Namespace nsdeletetest-6990 was already deleted STEP: Destroying namespace "nsdeletetest-3307" for this suite. 
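------------------------------
The namespace-deletion spec above relies on cascading deletion: removing a Namespace eventually removes every object inside it, including Services, and a freshly recreated namespace of the same name starts out empty. A compressed client-go sketch of that flow; all names are placeholders and error handling is elided after the setup.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// A test namespace plus a Service inside it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest-example"}}
	client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service", Namespace: "nsdeletetest-example"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "example"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	client.CoreV1().Services("nsdeletetest-example").Create(ctx, svc, metav1.CreateOptions{})

	// Deleting the namespace cascades to everything it contains, including the
	// Service; the spec then recreates the namespace and checks that listing
	// Services in it returns nothing.
	client.CoreV1().Namespaces().Delete(ctx, "nsdeletetest-example", metav1.DeleteOptions{})
}
------------------------------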
• [SLOW TEST:6.123 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":17,"completed":8,"skipped":2077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:43:31.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 11:43:31.331: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Jun 3 11:43:31.340: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:31.343: INFO: Number of nodes with available pods: 0 Jun 3 11:43:31.343: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:43:32.349: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:32.353: INFO: Number of nodes with available pods: 1 Jun 3 11:43:32.353: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:43:33.348: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:33.352: INFO: Number of nodes with available pods: 2 Jun 3 11:43:33.352: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jun 3 11:43:33.380: INFO: Wrong image for pod: daemon-set-28njl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:33.381: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:33.385: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:34.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:34.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:35.391: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:35.396: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:36.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:36.394: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:37.391: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:37.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:38.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:38.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:39.390: INFO: Wrong image for pod: daemon-set-vs84c. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:39.394: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:40.391: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:40.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:41.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:41.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:42.391: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:42.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:43.391: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:43.396: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:44.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:44.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:45.390: INFO: Pod daemon-set-qf9cr is not available Jun 3 11:43:45.390: INFO: Wrong image for pod: daemon-set-vs84c. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.32, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1. Jun 3 11:43:45.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:46.394: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:47.394: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:48.390: INFO: Pod daemon-set-mnj46 is not available Jun 3 11:43:48.395: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jun 3 11:43:48.400: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:48.404: INFO: Number of nodes with available pods: 1 Jun 3 11:43:48.404: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:43:49.411: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:43:49.415: INFO: Number of nodes with available pods: 2 Jun 3 11:43:49.415: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2289, will wait for the garbage collector to delete the pods Jun 3 11:43:49.494: INFO: Deleting DaemonSet.extensions daemon-set took: 5.98035ms Jun 3 11:43:49.594: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.682796ms Jun 3 11:43:55.498: INFO: Number of nodes with available pods: 0 Jun 3 11:43:55.498: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 11:43:55.501: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3305473"},"items":null} Jun 3 11:43:55.504: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3305473"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:43:55.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2289" for this suite. 
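------------------------------
The RollingUpdate spec above only changes the pod template image (httpd:2.4.38-1 to agnhost:2.32 in the log); the DaemonSet controller then replaces pods node by node, which is why the "Wrong image for pod" lines disappear one pod at a time. Below is a sketch of the same kind of update via a strategic-merge patch, assuming client-go; the container name and namespace are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Keep the update strategy at RollingUpdate (the apps/v1 default) and bump
	// the image; the controller deletes and recreates daemon pods one node at
	// a time, respecting maxUnavailable.
	patch := []byte(`{
	  "spec": {
	    "updateStrategy": {"type": "RollingUpdate"},
	    "template": {"spec": {"containers": [
	      {"name": "app", "image": "k8s.gcr.io/e2e-test-images/agnhost:2.32"}
	    ]}}
	  }
	}`)
	ds, err := client.AppsV1().DaemonSets("default").Patch(context.TODO(), "daemon-set",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	// Rises toward DesiredNumberScheduled as the rollout progresses.
	fmt.Println("updated pods so far:", ds.Status.UpdatedNumberScheduled)
}
------------------------------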
• [SLOW TEST:24.248 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":17,"completed":9,"skipped":2905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:43:55.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 3 11:43:55.579: INFO: Waiting up to 1m0s for all nodes to be ready Jun 3 11:44:55.637: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create pods that use 2/3 of node resources. Jun 3 11:44:55.665: INFO: Created pod: pod0-sched-preemption-low-priority Jun 3 11:44:55.687: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:45:04.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7606" for this suite. 
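------------------------------
The "critical pod" in the spec above uses one of the built-in system priority classes rather than a user-defined one; those classes carry very large values, so such a pod can preempt the lower-priority filler pods. A sketch under that assumption; the pod name and image are placeholders, and note that system-* classes are normally only accepted for pods in kube-system.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			// system-cluster-critical and system-node-critical are created by
			// the cluster itself and outrank user-defined priority classes, so
			// this pod can displace a lower-priority pod on a full node.
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("kube-system").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------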
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:68.620 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":17,"completed":10,"skipped":3513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:45:04.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 11:45:04.205: INFO: Create a RollingUpdate DaemonSet Jun 3 11:45:04.210: INFO: Check that daemon pods launch on every node of the cluster Jun 3 11:45:04.214: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:04.216: INFO: Number of nodes with available pods: 0 Jun 3 11:45:04.216: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:45:05.222: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:05.226: INFO: Number of nodes with available pods: 0 Jun 3 11:45:05.226: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:45:06.279: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:06.283: INFO: Number of nodes with available pods: 2 Jun 3 11:45:06.283: INFO: Number of running nodes: 2, number of available pods: 2 Jun 3 11:45:06.283: INFO: Update the DaemonSet to trigger a rollout Jun 3 11:45:06.291: INFO: Updating DaemonSet daemon-set Jun 3 11:45:15.394: INFO: Roll back the DaemonSet before rollout is complete Jun 3 11:45:15.403: INFO: Updating DaemonSet daemon-set Jun 3 11:45:15.403: INFO: Make sure DaemonSet rollback is complete Jun 3 11:45:15.407: INFO: Wrong image for pod: daemon-set-d6d4t. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1, got: foo:non-existent. 
Jun 3 11:45:15.407: INFO: Pod daemon-set-d6d4t is not available Jun 3 11:45:15.412: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:16.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:17.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:18.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:19.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:20.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:21.421: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:22.421: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:23.422: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:24.486: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:45:25.417: INFO: Pod daemon-set-hfmv2 is not available Jun 3 11:45:25.421: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3180, will wait for the garbage collector to delete the pods Jun 3 11:45:25.487: INFO: Deleting DaemonSet.extensions daemon-set took: 5.344156ms Jun 3 11:45:25.588: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.306049ms Jun 3 11:45:35.492: INFO: Number of nodes with available pods: 0 Jun 3 11:45:35.492: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 11:45:35.495: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3305914"},"items":null} Jun 3 11:45:35.498: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3305914"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:45:35.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3180" for this suite. 
• [SLOW TEST:31.363 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":17,"completed":11,"skipped":3553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:45:35.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 3 11:45:35.569: INFO: Waiting up to 1m0s for all nodes to be ready Jun 3 11:46:35.617: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:46:35.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Jun 3 11:46:37.689: INFO: found a healthy node: v1.21-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 11:46:43.767: INFO: pods created so far: [1 1 1] Jun 3 11:46:43.767: INFO: length of pods created so far: 3 Jun 3 11:46:59.778: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:47:06.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5356" for this suite. 
[AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:47:06.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1522" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:91.351 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":17,"completed":12,"skipped":3895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:47:06.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:47:35.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1866" for this suite. STEP: Destroying namespace "nsdeletetest-9301" for this suite. Jun 3 11:47:36.006: INFO: Namespace nsdeletetest-9301 was already deleted STEP: Destroying namespace "nsdeletetest-3595" for this suite. 
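The Namespaces spec that ends here checks that deleting a namespace garbage-collects the pods inside it before the name can be reused. A compact sketch of that flow (namespace and pod names are made up):

    # Sketch (illustrative): pods disappear together with their namespace.
    import time
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    core = client.CoreV1Api()

    core.create_namespace(client.V1Namespace(
        metadata=client.V1ObjectMeta(name="nsdelete-demo")))
    core.create_namespaced_pod("nsdelete-demo", client.V1Pod(
        metadata=client.V1ObjectMeta(name="test-pod"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="nginx", image="nginx:1.21")])))

    core.delete_namespace("nsdelete-demo")
    while True:                        # wait for the namespace, and with it the pod, to be removed
        try:
            core.read_namespace("nsdelete-demo")
            time.sleep(2)
        except ApiException as e:
            if e.status == 404:        # namespace gone, so its pods were garbage-collected
                break
            raise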
• [SLOW TEST:29.132 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":17,"completed":13,"skipped":3999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:47:36.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 3 11:47:36.072: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:36.075: INFO: Number of nodes with available pods: 0 Jun 3 11:47:36.075: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:47:37.081: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:37.085: INFO: Number of nodes with available pods: 0 Jun 3 11:47:37.085: INFO: Node v1.21-worker is running more than one daemon pod Jun 3 11:47:38.081: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:38.085: INFO: Number of nodes with available pods: 2 Jun 3 11:47:38.085: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jun 3 11:47:38.102: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:38.106: INFO: Number of nodes with available pods: 1 Jun 3 11:47:38.106: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:39.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:39.115: INFO: Number of nodes with available pods: 1 Jun 3 11:47:39.115: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:40.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:40.116: INFO: Number of nodes with available pods: 1 Jun 3 11:47:40.116: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:41.179: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:41.183: INFO: Number of nodes with available pods: 1 Jun 3 11:47:41.183: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:42.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:42.116: INFO: Number of nodes with available pods: 1 Jun 3 11:47:42.116: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:43.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:43.115: INFO: Number of nodes with available pods: 1 Jun 3 11:47:43.115: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:44.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:44.117: INFO: Number of nodes with available pods: 1 Jun 3 11:47:44.117: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:45.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:45.116: INFO: Number of nodes with available pods: 1 Jun 3 11:47:45.116: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:46.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:46.116: INFO: Number of nodes with available pods: 1 Jun 3 11:47:46.116: INFO: Node v1.21-worker2 is running more than one daemon pod Jun 3 11:47:47.112: INFO: DaemonSet pods can't tolerate node v1.21-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 3 11:47:47.116: INFO: Number of nodes with available pods: 2 Jun 3 11:47:47.116: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: 
Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1952, will wait for the garbage collector to delete the pods Jun 3 11:47:47.180: INFO: Deleting DaemonSet.extensions daemon-set took: 6.547707ms Jun 3 11:47:47.280: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.279096ms Jun 3 11:47:55.484: INFO: Number of nodes with available pods: 0 Jun 3 11:47:55.484: INFO: Number of running nodes: 0, number of available pods: 0 Jun 3 11:47:55.487: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"3306597"},"items":null} Jun 3 11:47:55.490: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"3306597"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:47:55.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1952" for this suite. • [SLOW TEST:19.496 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":17,"completed":14,"skipped":4261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:47:55.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 11:47:55.550: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 11:47:55.558: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 11:47:55.562: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test Jun 3 11:47:55.571: INFO: chaos-daemon-vxnd4 from default started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:47:55.571: INFO: create-loop-devs-2zz2t from kube-system started at 2021-06-01 18:06:45 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:47:55.571: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container kindnet-cni ready: true, restart count 6 Jun 3 11:47:55.571: INFO: kube-multus-ds-vvcq9 from kube-system started at 2021-06-01 18:06:35 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container kube-multus ready: true, restart count 0 Jun 3 11:47:55.571: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:47:55.571: INFO: tune-sysctls-dkbjj from kube-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:47:55.571: INFO: speaker-c7g2h from metallb-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.571: INFO: Container speaker ready: true, restart count 0 Jun 3 11:47:55.571: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test Jun 3 11:47:55.581: INFO: chaos-controller-manager-69c479c674-6l597 from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container chaos-mesh ready: true, restart count 0 Jun 3 11:47:55.581: INFO: chaos-daemon-zspvx from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:47:55.581: INFO: dockerd from default started at 2021-05-25 17:35:22 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container dockerd ready: true, restart count 0 Jun 3 11:47:55.581: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:47:55.581: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container kindnet-cni ready: true, restart count 7 Jun 3 11:47:55.581: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container kube-multus ready: true, restart count 1 Jun 3 11:47:55.581: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:47:55.581: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:47:55.581: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 Jun 3 11:47:55.581: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 3 11:47:55.581: INFO: chaos-operator-ce-5754fd4b69-zxmdb from litmus started at 2021-05-25 19:03:14 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container chaos-operator ready: true, restart count 0 Jun 3 11:47:55.581: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container controller ready: true, restart count 0 Jun 3 11:47:55.581: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container speaker ready: true, restart count 0 Jun 3 11:47:55.581: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container contour ready: true, restart count 0 Jun 3 11:47:55.581: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) Jun 3 11:47:55.581: INFO: Container contour ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16851009f0634b9a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:47:56.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7298" for this suite. 
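The FailedScheduling event quoted above comes from a pod whose nodeSelector matches no node label, so it can never be placed. A small sketch of a pod that stays Pending for the same reason (the label key and value are invented):

    # Sketch (illustrative): a nodeSelector no node satisfies keeps the pod Pending.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    core.create_namespaced_pod("default", client.V1Pod(
        metadata=client.V1ObjectMeta(name="restricted-pod"),
        spec=client.V1PodSpec(
            node_selector={"example.com/nonexistent": "42"},   # no node carries this label
            containers=[client.V1Container(name="app", image="k8s.gcr.io/pause:3.4.1")])))
    # Describing the pod then shows the same kind of event:
    # "0/3 nodes are available: ... didn't match Pod's node affinity/selector."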
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":17,"completed":15,"skipped":4501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:47:56.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 3 11:47:56.677: INFO: Waiting up to 1m0s for all nodes to be ready Jun 3 11:48:56.736: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:48:56.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jun 3 11:48:56.812: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jun 3 11:48:56.816: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:48:56.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5541" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:48:56.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-209" for this suite. 
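The two "is invalid: Value: Forbidden: may not be changed in an update" messages above are the expected errors: the value field of a PriorityClass is immutable, while other fields can still be updated. A sketch of the same behaviour (class name and numbers are illustrative; the patch uses the Python client's default dict-body patch):

    # Sketch (illustrative): PriorityClass.value is immutable after creation.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    sched = client.SchedulingV1Api()

    sched.create_priority_class(client.V1PriorityClass(
        metadata=client.V1ObjectMeta(name="p1-demo"), value=100, global_default=False))

    try:
        sched.patch_priority_class("p1-demo", {"value": 200})      # rejected: value may not change
    except ApiException as e:
        print("rejected as expected:", e.status, e.reason)

    sched.patch_priority_class("p1-demo", {"description": "still mutable"})   # allowed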
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.266 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":17,"completed":16,"skipped":4635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 11:48:56.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 11:48:56.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 11:48:56.945: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 11:48:56.948: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test Jun 3 11:48:56.956: INFO: chaos-daemon-vxnd4 from default started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:48:56.957: INFO: create-loop-devs-2zz2t from kube-system started at 2021-06-01 18:06:45 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:48:56.957: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container kindnet-cni ready: true, restart count 6 Jun 3 11:48:56.957: INFO: kube-multus-ds-vvcq9 from kube-system started at 2021-06-01 18:06:35 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container kube-multus ready: true, restart count 0 Jun 3 11:48:56.957: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:48:56.957: INFO: tune-sysctls-dkbjj from kube-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:48:56.957: INFO: speaker-c7g2h from metallb-system started at 2021-06-01 18:06:16 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.957: INFO: Container speaker ready: true, restart count 0 Jun 3 11:48:56.957: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test Jun 3 11:48:56.966: INFO: chaos-controller-manager-69c479c674-6l597 from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container chaos-mesh ready: true, restart count 0 Jun 3 11:48:56.966: INFO: chaos-daemon-zspvx from default started at 2021-05-25 17:38:09 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container chaos-daemon ready: true, restart count 0 Jun 3 11:48:56.966: INFO: dockerd from default started at 2021-05-25 17:35:22 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container dockerd ready: true, restart count 0 Jun 3 11:48:56.966: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container loopdev ready: true, restart count 0 Jun 3 11:48:56.966: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container kindnet-cni ready: true, restart count 7 Jun 3 11:48:56.966: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container kube-multus ready: true, restart count 1 Jun 3 11:48:56.966: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container kube-proxy ready: true, restart count 0 Jun 3 11:48:56.966: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container setsysctls ready: true, restart count 0 Jun 3 11:48:56.966: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container 
dashboard-metrics-scraper ready: true, restart count 0 Jun 3 11:48:56.966: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container kubernetes-dashboard ready: true, restart count 0 Jun 3 11:48:56.966: INFO: chaos-operator-ce-5754fd4b69-zxmdb from litmus started at 2021-05-25 19:03:14 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container chaos-operator ready: true, restart count 0 Jun 3 11:48:56.966: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container controller ready: true, restart count 0 Jun 3 11:48:56.966: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container speaker ready: true, restart count 0 Jun 3 11:48:56.966: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container contour ready: true, restart count 0 Jun 3 11:48:56.966: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) Jun 3 11:48:56.966: INFO: Container contour ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9f2b5d9e-a7ff-4c8a-b282-1235f92b5e62 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.4 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-9f2b5d9e-a7ff-4c8a-b282-1235f92b5e62 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-9f2b5d9e-a7ff-4c8a-b282-1235f92b5e62 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 11:54:01.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7389" for this suite. 
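The final predicate spec exercises the hostPort conflict rule: a pod that binds hostPort 54322 on hostIP 0.0.0.0 blocks any other pod asking for the same port and protocol on a specific hostIP of that node, so the second pod never schedules. A sketch of the two conflicting declarations (node label, namespace and image are assumptions; the test itself pins pods with a random node label rather than kubernetes.io/hostname):

    # Sketch (illustrative): same hostPort/protocol, 0.0.0.0 vs a specific hostIP, on one node.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    def host_port_pod(name, host_ip):
        return client.V1Pod(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1PodSpec(
                node_selector={"kubernetes.io/hostname": "v1.21-worker"},   # keep both on one node
                containers=[client.V1Container(
                    name="app", image="k8s.gcr.io/pause:3.4.1",
                    ports=[client.V1ContainerPort(
                        container_port=8080, host_port=54322,
                        host_ip=host_ip, protocol="TCP")])]))

    core.create_namespaced_pod("default", host_port_pod("pod4", "0.0.0.0"))     # schedules
    core.create_namespaced_pod("default", host_port_pod("pod5", "172.18.0.4"))  # stays Pending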
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:304.163 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":17,"completed":17,"skipped":5545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJun 3 11:54:01.073: INFO: Running AfterSuite actions on all nodes Jun 3 11:54:01.073: INFO: Running AfterSuite actions on node 1 Jun 3 11:54:01.073: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":17,"completed":17,"skipped":5754,"failed":0} Ran 17 of 5771 Specs in 827.402 seconds SUCCESS! -- 17 Passed | 0 Failed | 0 Pending | 5754 Skipped PASS Ginkgo ran 1 suite in 13m49.092272903s Test Suite Passed