I0827 13:32:02.267519 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0827 13:32:02.267774 17 e2e.go:129] Starting e2e run "c6f7d824-e95e-4a07-8906-84eab1863b78" on Ginkgo node 1
{"msg":"Test Suite starting","total":18,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630071120 - Will randomize all specs
Will run 18 of 5668 specs

Aug 27 13:32:02.360: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 13:32:02.364: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 13:32:02.391: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 13:32:02.439: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 13:32:02.439: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 13:32:02.439: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 13:32:02.448: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 13:32:02.448: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 13:32:02.448: INFO: e2e test version: v1.20.10
Aug 27 13:32:02.450: INFO: kube-apiserver version: v1.20.7
Aug 27 13:32:02.450: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 13:32:02.457: INFO: Cluster IP family: ipv4
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:32:02.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
Aug 27 13:32:02.503: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 27 13:32:02.513: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:32:02.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8768" for this suite.
STEP: Destroying namespace "nspatchtest-7679d139-e666-42d7-b0ac-80db2cbb9a18-2143" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":18,"completed":1,"skipped":482,"failed":0}
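For reference, the "patching the Namespace" step above boils down to a strategic-merge patch that adds a label and then reads the namespace back. A minimal client-go sketch of that flow, assuming the same kubeconfig the suite logs; the namespace name and label values here are illustrative, not the ones the conformance test generates:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig the run reports (>>> kubeConfig: /root/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Hypothetical namespace name; the e2e framework generates names like "nspatchtest-<uuid>-NNNN".
	name := "nspatchtest-demo"

	// Strategic-merge patch that adds a label, mirroring "STEP: patching the Namespace".
	patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
	ns, err := client.CoreV1().Namespaces().Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}

	// Mirrors "STEP: get the Namespace and ensuring it has the label".
	fmt.Printf("labels on %s: %v\n", ns.Name, ns.Labels)
}
```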
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:32:02.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:32:15.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4744" for this suite.
STEP: Destroying namespace "nsdeletetest-9361" for this suite.
Aug 27 13:32:15.713: INFO: Namespace nsdeletetest-9361 was already deleted
STEP: Destroying namespace "nsdeletetest-8849" for this suite.
• [SLOW TEST:13.150 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":18,"completed":2,"skipped":618,"failed":0}
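The delete-and-wait sequence above ("Deleting the namespace" / "Waiting for the namespace to be removed.") can be reproduced with a namespace delete followed by polling until the GET returns NotFound, at which point every pod in it is gone too. A rough sketch under the same kubeconfig assumption, with a made-up namespace name:

```go
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Hypothetical namespace; the e2e test uses generated names like "nsdeletetest-NNNN".
	name := "nsdeletetest-demo"

	// "STEP: Deleting the namespace"
	if err := client.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// "STEP: Waiting for the namespace to be removed." — poll until the namespace is gone.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, getErr := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		return apierrors.IsNotFound(getErr), nil
	})
	if err != nil {
		panic(err)
	}
}
```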
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:32:15.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Aug 27 13:32:15.764: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 27 13:33:15.794: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create pods that use 2/3 of node resources.
Aug 27 13:33:15.820: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 27 13:33:15.850: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:33:37.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-771" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:82.219 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":18,"completed":3,"skipped":880,"failed":0}
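The preemption scenario above relies on PriorityClasses and a "preemptor" pod whose requests cannot fit until a lower-priority pod is evicted. A simplified sketch of the objects involved; the names, priority value, and CPU figure are illustrative, not the ones the test computes from node capacity:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// A high-priority class; the e2e test creates low/medium/high classes of its own.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-high-priority"},
		Value:      1000,
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The preemptor: same kind of resource request as the victim pods but a higher priority,
	// so the scheduler evicts a lower-priority pod to make room for it.
	preemptor := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod", Namespace: "default"},
		Spec: corev1.PodSpec{
			PriorityClassName: "demo-high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, preemptor, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```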
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:33:37.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 13:33:37.984: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 13:33:37.993: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 13:33:37.997: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test
Aug 27 13:33:38.004: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container astaire ready: true, restart count 0
Aug 27 13:33:38.004: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.004: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container cassandra ready: true, restart count 0
Aug 27 13:33:38.004: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container ellis ready: true, restart count 0
Aug 27 13:33:38.004: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container homer ready: true, restart count 0
Aug 27 13:33:38.004: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container homestead ready: true, restart count 0
Aug 27 13:33:38.004: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.004: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.004: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:33:38.005: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.005: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 13:33:38.005: INFO: preemptor-pod from sched-preemption-771 started at 2021-08-27 13:33:35 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.005: INFO: Container preemptor-pod ready: true, restart count 0
Aug 27 13:33:38.005: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test
Aug 27 13:33:38.012: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container bono ready: true, restart count 0
Aug 27 13:33:38.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.012: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container chronos ready: true, restart count 0
Aug 27 13:33:38.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.012: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container etcd ready: true, restart count 0
Aug 27 13:33:38.012: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container homestead-prov ready: true, restart count 0
Aug 27 13:33:38.012: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container ralf ready: true, restart count 0
Aug 27 13:33:38.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.012: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container sprout ready: true, restart count 0
Aug 27 13:33:38.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:33:38.012: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.012: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:33:38.013: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.013: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 13:33:38.013: INFO: pod1-sched-preemption-medium-priority from sched-preemption-771 started at 2021-08-27 13:33:21 +0000 UTC (1 container statuses recorded)
Aug 27 13:33:38.013: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4dcad259-6171-4e0a-9b6e-1b4a0ef33adc 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.23.0.8 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-4dcad259-6171-4e0a-9b6e-1b4a0ef33adc off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4dcad259-6171-4e0a-9b6e-1b4a0ef33adc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:38:42.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3634" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:304.172 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":18,"completed":4,"skipped":965,"failed":0}
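The conflict in this test comes from two pods asking for the same hostPort and protocol on the same node, one binding 0.0.0.0 and the other a specific node IP (172.23.0.8 in this run). A condensed sketch of the two pod specs; the node label used for pinning is a placeholder standing in for the random kubernetes.io/e2e-... label the test applies:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod that requests hostPort 54322 with the given hostIP binding
// and is pinned to the labelled node via a node selector.
func hostPortPod(name, hostIP string, selector map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: corev1.PodSpec{
			NodeSelector: selector,
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Hypothetical label applied to one node beforehand.
	sel := map[string]string{"e2e-demo": "hostport"}

	// pod4: hostPort 54322 bound on 0.0.0.0 — expected to schedule onto the labelled node.
	if _, err := client.CoreV1().Pods("default").Create(ctx, hostPortPod("pod4", "0.0.0.0", sel), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// pod5: same hostPort and protocol but a specific hostIP. The scheduler treats this as a
	// port conflict with pod4 on that node, so pod5 is expected to stay Pending.
	if _, err := client.CoreV1().Pods("default").Create(ctx, hostPortPod("pod5", "172.23.0.8", sel), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```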
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:38:42.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Aug 27 13:38:42.172: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 27 13:39:42.202: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:39:42.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Aug 27 13:39:44.286: INFO: found a healthy node: capi-leguer-md-0-555f949c67-5brzb
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug 27 13:39:52.372: INFO: pods created so far: [1 1 1]
Aug 27 13:39:52.372: INFO: length of pods created so far: 3
Aug 27 13:39:56.384: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:40:03.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-5208" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:40:03.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5725" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:81.368 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":18,"completed":5,"skipped":978,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:40:03.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 13:40:03.524: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 13:40:03.532: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 13:40:03.536: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test
Aug 27 13:40:03.543: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container astaire ready: true, restart count 0
Aug 27 13:40:03.543: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.543: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container cassandra ready: true, restart count 0
Aug 27 13:40:03.543: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container ellis ready: true, restart count 0
Aug 27 13:40:03.543: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container homer ready: true, restart count 0
Aug 27 13:40:03.543: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container homestead ready: true, restart count 0
Aug 27 13:40:03.543: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.543: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:40:03.543: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 13:40:03.543: INFO: pod4 from sched-preemption-path-5208 started at 2021-08-27 13:39:54 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container pod4 ready: true, restart count 0
Aug 27 13:40:03.543: INFO: rs-pod3-64kvr from sched-preemption-path-5208 started at 2021-08-27 13:39:50 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.543: INFO: Container pod3 ready: true, restart count 0
Aug 27 13:40:03.543: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test
Aug 27 13:40:03.550: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container bono ready: true, restart count 0
Aug 27 13:40:03.550: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.550: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container chronos ready: true, restart count 0
Aug 27 13:40:03.550: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.550: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container etcd ready: true, restart count 0
Aug 27 13:40:03.550: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container homestead-prov ready: true, restart count 0
Aug 27 13:40:03.550: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container ralf ready: true, restart count 0
Aug 27 13:40:03.550: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.550: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container sprout ready: true, restart count 0
Aug 27 13:40:03.550: INFO: Container tailer ready: true, restart count 0
Aug 27 13:40:03.550: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:40:03.550: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:40:03.550: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: verifying the node has the label node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node has the label node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod astaire-58968c8b7f-2cfpc requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod bono-6957967566-mbkl6 requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod cassandra-5b9d7c8d97-mtg6p requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod chronos-f6f76cf57-29d9g requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod ellis-6d4bcd9976-wjzcr requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod etcd-744b4d9f98-wlr24 requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod homer-74f8c889f9-dp4pj requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod homestead-f47c95f88-r5gtl requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod homestead-prov-77b78dd7f8-nz7qc requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod ralf-8597986d58-p7crz requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod sprout-58578d4fcd-89l45 requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod kindnet-b64vj requesting resource cpu=100m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod kindnet-fp7vq requesting resource cpu=100m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod kube-proxy-6wb6p requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.630: INFO: Pod kube-proxy-kg48d requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-tw45m
Aug 27 13:40:09.630: INFO: Pod pod4 requesting resource cpu=0m on Node capi-leguer-md-0-555f949c67-5brzb
STEP: Starting Pods to consume most of the cluster CPU.
Aug 27 13:40:09.630: INFO: Creating a pod which consumes cpu=61530m on Node capi-leguer-md-0-555f949c67-5brzb
Aug 27 13:40:09.638: INFO: Creating a pod which consumes cpu=61530m on Node capi-leguer-md-0-555f949c67-tw45m
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c.169f2d7e135a0823], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6452/filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c to capi-leguer-md-0-555f949c67-tw45m]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c.169f2d7e423adaf0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c.169f2d7e438798af], Reason = [Created], Message = [Created container filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c.169f2d7e51f0dc87], Reason = [Started], Message = [Started container filler-pod-8caf096f-4d59-4c29-bfc2-dbc498f9d09c]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d.169f2d7e13372e1e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6452/filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d to capi-leguer-md-0-555f949c67-5brzb]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d.169f2d7e41e95159], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d.169f2d7e4387db8c], Reason = [Created], Message = [Created container filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d.169f2d7e51eb93e7], Reason = [Started], Message = [Started container filler-pod-d2dc2cf9-5122-429e-816f-3118a820ca2d]
STEP: Considering event: Type = [Warning], Name = [additional-pod.169f2d7e8bc13079], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node capi-leguer-md-0-555f949c67-tw45m
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:40:12.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6452" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:9.226 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":18,"completed":6,"skipped":1012,"failed":0}
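The "filler" pods above simply carry a CPU request sized from the nodes' allocatable CPU so that one final pod no longer fits and fails with "Insufficient cpu". A minimal sketch of such a pod; the 61530m figure is what this particular run computed, the pinning via the standard hostname label is an illustrative simplification of what the test does:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A filler pod that books most of one node's CPU; the test creates one per node and
	// then shows that an additional pod cannot be scheduled anywhere.
	filler := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			// Pin to a specific node via the built-in hostname label (node name from the log).
			NodeSelector: map[string]string{"kubernetes.io/hostname": "capi-leguer-md-0-555f949c67-5brzb"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("61530m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("61530m")},
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), filler, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```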
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:40:12.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 27 13:40:13.030: INFO: Pod name wrapped-volume-race-18e4824b-31bf-43a1-84a3-ec1c9e4eeb9e: Found 3 pods out of 5
Aug 27 13:40:18.038: INFO: Pod name wrapped-volume-race-18e4824b-31bf-43a1-84a3-ec1c9e4eeb9e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-18e4824b-31bf-43a1-84a3-ec1c9e4eeb9e in namespace emptydir-wrapper-7016, will wait for the garbage collector to delete the pods
Aug 27 13:40:28.129: INFO: Deleting ReplicationController wrapped-volume-race-18e4824b-31bf-43a1-84a3-ec1c9e4eeb9e took: 7.712383ms
Aug 27 13:40:28.630: INFO: Terminating ReplicationController wrapped-volume-race-18e4824b-31bf-43a1-84a3-ec1c9e4eeb9e pods took: 500.280874ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 13:40:35.662: INFO: Pod name wrapped-volume-race-a5bcad90-0cf4-4a3b-bf61-dd4fd90e13d2: Found 0 pods out of 5
Aug 27 13:40:40.670: INFO: Pod name wrapped-volume-race-a5bcad90-0cf4-4a3b-bf61-dd4fd90e13d2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a5bcad90-0cf4-4a3b-bf61-dd4fd90e13d2 in namespace emptydir-wrapper-7016, will wait for the garbage collector to delete the pods
Aug 27 13:40:50.759: INFO: Deleting ReplicationController wrapped-volume-race-a5bcad90-0cf4-4a3b-bf61-dd4fd90e13d2 took: 8.25125ms
Aug 27 13:40:51.260: INFO: Terminating ReplicationController wrapped-volume-race-a5bcad90-0cf4-4a3b-bf61-dd4fd90e13d2 pods took: 500.22398ms
STEP: Creating RC which spawns configmap-volume pods
Aug 27 13:40:55.780: INFO: Pod name wrapped-volume-race-6a880b89-eab9-4391-9e8a-6443deeeda79: Found 0 pods out of 5
Aug 27 13:41:00.790: INFO: Pod name wrapped-volume-race-6a880b89-eab9-4391-9e8a-6443deeeda79: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6a880b89-eab9-4391-9e8a-6443deeeda79 in namespace emptydir-wrapper-7016, will wait for the garbage collector to delete the pods
Aug 27 13:41:10.880: INFO: Deleting ReplicationController wrapped-volume-race-6a880b89-eab9-4391-9e8a-6443deeeda79 took: 8.318491ms
Aug 27 13:41:11.381: INFO: Terminating ReplicationController wrapped-volume-race-6a880b89-eab9-4391-9e8a-6443deeeda79 pods took: 500.273133ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:41:14.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7016" for this suite.
• [SLOW TEST:62.263 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":18,"completed":7,"skipped":1551,"failed":0}
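Each pod spawned by those ReplicationControllers mounts a large number of ConfigMap volumes at once, which is the pattern this test exercises for races during repeated creation and garbage collection (the log shows 50 ConfigMaps and RCs of 5 replicas per round). A reduced sketch of one such pod with only a handful of ConfigMap volumes, under the same kubeconfig assumption:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	ns := "default"

	// Create a few ConfigMaps and mount every one of them into a single pod,
	// mimicking (at small scale) the wrapped-volume-race pods in the log.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 5; i++ {
		name := fmt.Sprintf("race-cm-%d", i)
		cm := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data": "1"},
		}
		if _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-demo"},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "pause",
				Image:        "k8s.gcr.io/pause:3.2",
				VolumeMounts: mounts,
			}},
		},
	}
	if _, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```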
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:41:14.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug 27 13:41:15.050: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 27 13:41:15.058: INFO: Number of nodes with available pods: 0
Aug 27 13:41:15.058: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 27 13:41:15.076: INFO: Number of nodes with available pods: 0
Aug 27 13:41:15.076: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:16.081: INFO: Number of nodes with available pods: 0
Aug 27 13:41:16.081: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:17.081: INFO: Number of nodes with available pods: 1
Aug 27 13:41:17.081: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 27 13:41:17.099: INFO: Number of nodes with available pods: 1
Aug 27 13:41:17.099: INFO: Number of running nodes: 0, number of available pods: 1
Aug 27 13:41:18.104: INFO: Number of nodes with available pods: 0
Aug 27 13:41:18.104: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 27 13:41:18.120: INFO: Number of nodes with available pods: 0
Aug 27 13:41:18.120: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:19.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:19.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:20.124: INFO: Number of nodes with available pods: 0
Aug 27 13:41:20.124: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:21.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:21.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:22.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:22.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:23.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:23.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:24.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:24.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:25.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:25.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:26.125: INFO: Number of nodes with available pods: 0
Aug 27 13:41:26.125: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod
Aug 27 13:41:27.125: INFO: Number of nodes with available pods: 1
Aug 27 13:41:27.125: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7170, will wait for the garbage collector to delete the pods
Aug 27 13:41:27.193: INFO: Deleting DaemonSet.extensions daemon-set took: 7.266576ms
Aug 27 13:41:27.293: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.227092ms
Aug 27 13:41:35.697: INFO: Number of nodes with available pods: 0
Aug 27 13:41:35.697: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 13:41:35.704: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"46026"},"items":null}
Aug 27 13:41:35.708: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"46026"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:41:35.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7170" for this suite.
• [SLOW TEST:20.753 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":18,"completed":8,"skipped":1683,"failed":0}
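The "complex daemon" above is a DaemonSet whose pod template carries a node selector (the blue/green label the test flips on a node) and whose update strategy is later switched to RollingUpdate. A rough equivalent of the object being exercised; the label key/value, namespace, and image are illustrative, not copied from the test:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying color=blue run a daemon pod; relabelling a node to
					// green (as the test does) causes the pod to be removed again.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```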
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:41:35.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Aug 27 13:41:35.809: INFO: Create a RollingUpdate DaemonSet
Aug 27 13:41:35.814: INFO: Check that daemon pods launch on every node of the cluster
Aug 27 13:41:35.818: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:35.821: INFO: Number of nodes with available pods: 0
Aug 27 13:41:35.821: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod
Aug 27 13:41:36.828: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:36.831: INFO: Number of nodes with available pods: 0
Aug 27 13:41:36.831: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod
Aug 27 13:41:37.827: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:37.831: INFO: Number of nodes with available pods: 2
Aug 27 13:41:37.831: INFO: Number of running nodes: 2, number of available pods: 2
Aug 27 13:41:37.831: INFO: Update the DaemonSet to trigger a rollout
Aug 27 13:41:37.841: INFO: Updating DaemonSet daemon-set
Aug 27 13:41:45.859: INFO: Roll back the DaemonSet before rollout is complete
Aug 27 13:41:45.869: INFO: Updating DaemonSet daemon-set
Aug 27 13:41:45.869: INFO: Make sure DaemonSet rollback is complete
Aug 27 13:41:45.873: INFO: Wrong image for pod: daemon-set-tv488. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 13:41:45.873: INFO: Pod daemon-set-tv488 is not available
Aug 27 13:41:45.877: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:46.887: INFO: Wrong image for pod: daemon-set-tv488. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 13:41:46.887: INFO: Pod daemon-set-tv488 is not available
Aug 27 13:41:46.892: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:47.883: INFO: Wrong image for pod: daemon-set-tv488. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 27 13:41:47.883: INFO: Pod daemon-set-tv488 is not available
Aug 27 13:41:47.889: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 27 13:41:48.884: INFO: Pod daemon-set-lwh79 is not available
Aug 27 13:41:48.889: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6815, will wait for the garbage collector to delete the pods
Aug 27 13:41:48.961: INFO: Deleting DaemonSet.extensions daemon-set took: 10.30595ms
Aug 27 13:41:49.461: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.286955ms
Aug 27 13:41:55.665: INFO: Number of nodes with available pods: 0
Aug 27 13:41:55.665: INFO: Number of running nodes: 0, number of available pods: 0
Aug 27 13:41:55.668: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"46153"},"items":null}
Aug 27 13:41:55.671: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"46153"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:41:55.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6815" for this suite.
• [SLOW TEST:19.942 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":18,"completed":9,"skipped":2579,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:41:55.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Aug 27 13:41:55.739: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 27 13:42:55.769: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create pods that use 2/3 of node resources.
Aug 27 13:42:55.798: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 27 13:42:55.824: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:43:17.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1208" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:82.238 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":18,"completed":10,"skipped":2631,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 13:43:17.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 13:43:17.981: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 13:43:17.990: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 13:43:17.993: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test
Aug 27 13:43:18.003: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container astaire ready: true, restart count 0
Aug 27 13:43:18.003: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.003: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container cassandra ready: true, restart count 0
Aug 27 13:43:18.003: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container ellis ready: true, restart count 0
Aug 27 13:43:18.003: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container homer ready: true, restart count 0
Aug 27 13:43:18.003: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container homestead ready: true, restart count 0
Aug 27 13:43:18.003: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.003: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:43:18.003: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.003: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 13:43:18.003: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test
Aug 27 13:43:18.012: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container bono ready: true, restart count 0
Aug 27 13:43:18.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.012: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container chronos ready: true, restart count 0
Aug 27 13:43:18.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.012: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container etcd ready: true, restart count 0
Aug 27 13:43:18.012: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container homestead-prov ready: true, restart count 0
Aug 27 13:43:18.012: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container ralf ready: true, restart count 0
Aug 27 13:43:18.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.012: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container sprout ready: true, restart count 0
Aug 27 13:43:18.012: INFO: Container tailer ready: true, restart count 0
Aug 27 13:43:18.012: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 13:43:18.012: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 13:43:18.012: INFO: pod1-sched-preemption-medium-priority from sched-preemption-1208 started at 2021-08-27 13:42:58 +0000 UTC (1 container statuses recorded)
Aug 27 13:43:18.012: INFO: Container pod1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ed5239d9-ae63-4030-9a73-6d9a155f3eba 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ed5239d9-ae63-4030-9a73-6d9a155f3eba off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ed5239d9-ae63-4030-9a73-6d9a155f3eba
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 13:43:22.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5041" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":18,"completed":11,"skipped":2995,"failed":0}
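The matching case above amounts to labelling one node with a random key/value and then relaunching the pod with a nodeSelector for exactly that label, so the scheduler must place it on that node. A small sketch; the label key/value are placeholders, whereas the test generates kubernetes.io/e2e-<uuid> keys:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// "Trying to apply a random label on the found node." — here with a placeholder label
	// on the node this run picked.
	nodeName := "capi-leguer-md-0-555f949c67-5brzb"
	patch := []byte(`{"metadata":{"labels":{"example.com/e2e-demo":"42"}}}`)
	if _, err := client.CoreV1().Nodes().Patch(ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// "Trying to relaunch the pod, now with labels." — a pod whose nodeSelector matches
	// the label, so it has to land on the labelled node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels", Namespace: "default"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```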
Aug 27 13:43:22.171: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:22.174: INFO: Number of nodes with available pods: 0 Aug 27 13:43:22.175: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:43:23.179: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:23.182: INFO: Number of nodes with available pods: 0 Aug 27 13:43:23.182: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:43:24.186: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:24.197: INFO: Number of nodes with available pods: 2 Aug 27 13:43:24.197: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 27 13:43:24.223: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:24.223: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:24.228: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:25.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:25.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:25.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:26.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:26.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:26.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:27.232: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:27.232: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:27.232: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:27.237: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:28.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 27 13:43:28.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:28.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:28.237: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:29.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:29.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:29.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:29.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:30.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:30.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:30.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:30.237: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:31.232: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:31.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:31.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:31.237: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:32.234: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:32.234: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:32.234: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:32.239: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:33.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:33.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:33.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:33.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:34.233: INFO: Wrong image for pod: daemon-set-2wvnt. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:34.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:34.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:34.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:35.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:35.233: INFO: Wrong image for pod: daemon-set-b5nsl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:35.233: INFO: Pod daemon-set-b5nsl is not available Aug 27 13:43:35.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:36.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:36.233: INFO: Pod daemon-set-bx98x is not available Aug 27 13:43:36.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:37.233: INFO: Wrong image for pod: daemon-set-2wvnt. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Aug 27 13:43:37.233: INFO: Pod daemon-set-2wvnt is not available Aug 27 13:43:37.239: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:38.233: INFO: Pod daemon-set-2wfln is not available Aug 27 13:43:38.238: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 27 13:43:38.243: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:38.247: INFO: Number of nodes with available pods: 1 Aug 27 13:43:38.247: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod Aug 27 13:43:39.253: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:43:39.256: INFO: Number of nodes with available pods: 2 Aug 27 13:43:39.256: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9555, will wait for the garbage collector to delete the pods Aug 27 13:43:39.333: INFO: Deleting DaemonSet.extensions daemon-set took: 5.810803ms Aug 27 13:43:39.833: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.277889ms Aug 27 13:43:45.636: INFO: Number of nodes with available pods: 0 Aug 27 13:43:45.636: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 13:43:45.639: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"46548"},"items":null} Aug 27 13:43:45.647: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"46548"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:43:45.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9555" for this suite. 
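The "Update daemon pods image." step above amounts to changing the DaemonSet pod template's container image from httpd:2.4.38-alpine to k8s.gcr.io/e2e-test-images/agnhost:2.21, after which the controller replaces the pods one by one. A hedged sketch of one way to apply such an update with a strategic merge patch (the container name "app" and the package/clientset wiring are assumptions carried over from the earlier sketch; the conformance test itself updates the object through the framework's own retry helpers):

    package sketches

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // patchDaemonSetImage switches the DaemonSet's pod template image, which is
    // what triggers the RollingUpdate rollout observed in the log above.
    func patchDaemonSetImage(cs kubernetes.Interface) error {
        patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
            `{"name":"app","image":"k8s.gcr.io/e2e-test-images/agnhost:2.21"}]}}}}`)
        _, err := cs.AppsV1().DaemonSets("daemonsets-9555").Patch(
            context.TODO(), "daemon-set", types.StrategicMergePatchType,
            patch, metav1.PatchOptions{})
        return err
    }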
• [SLOW TEST:23.570 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":18,"completed":12,"skipped":3017,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:43:45.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 13:43:45.709: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 13:43:45.718: INFO: Waiting for terminating namespaces to be deleted... 
Aug 27 13:43:45.721: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 13:43:45.729: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.729: INFO: Container astaire ready: true, restart count 0 Aug 27 13:43:45.729: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.729: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.729: INFO: Container cassandra ready: true, restart count 0 Aug 27 13:43:45.729: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.729: INFO: Container ellis ready: true, restart count 0 Aug 27 13:43:45.729: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.729: INFO: Container homer ready: true, restart count 0 Aug 27 13:43:45.729: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.729: INFO: Container homestead ready: true, restart count 0 Aug 27 13:43:45.729: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.729: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.729: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 13:43:45.729: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.729: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 13:43:45.729: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 13:43:45.737: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.737: INFO: Container bono ready: true, restart count 0 Aug 27 13:43:45.737: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.737: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.737: INFO: Container chronos ready: true, restart count 0 Aug 27 13:43:45.737: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.737: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.737: INFO: Container etcd ready: true, restart count 0 Aug 27 13:43:45.737: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.737: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 13:43:45.737: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.737: INFO: Container ralf ready: true, restart count 0 Aug 27 13:43:45.737: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.737: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 13:43:45.737: INFO: Container sprout ready: true, restart count 0 Aug 27 13:43:45.737: INFO: Container tailer ready: true, restart count 0 Aug 27 13:43:45.737: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container 
statuses recorded) Aug 27 13:43:45.737: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 13:43:45.737: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:43:45.737: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.169f2db064b5e093], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:43:46.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3254" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":18,"completed":13,"skipped":3344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:43:46.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Aug 27 13:43:46.825: INFO: Waiting up to 1m0s for all nodes to be ready Aug 27 13:44:46.857: INFO: Waiting for terminating namespaces to be deleted... 
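In the PriorityClass endpoints test that follows, the two "Value: Forbidden: may not be changed in an update" messages are the apiserver rejecting updates that try to change a PriorityClass's value, which is immutable after creation. A hedged sketch of that pattern (the name "p1" matches the log; the numeric values, package name and clientset wiring are illustrative assumptions):

    package sketches

    import (
        "context"
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // priorityClassValueIsImmutable creates a PriorityClass and then attempts
    // to change its Value on update; the apiserver is expected to reject the
    // update with a Forbidden error like the ones logged below.
    func priorityClassValueIsImmutable(cs kubernetes.Interface) error {
        pc := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "p1"},
            Value:      100, // illustrative value
        }
        created, err := cs.SchedulingV1().PriorityClasses().Create(
            context.TODO(), pc, metav1.CreateOptions{})
        if err != nil {
            return err
        }
        created.Value = 200 // changing Value is what the apiserver forbids
        if _, err := cs.SchedulingV1().PriorityClasses().Update(
            context.TODO(), created, metav1.UpdateOptions{}); err != nil {
            fmt.Println("update rejected as expected:", err)
        }
        return nil
    }

Other verbs on the same endpoints (get, list, patch of mutable fields, delete) succeed, which is why the test still passes despite the Forbidden messages.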
[BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:44:46.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Aug 27 13:44:46.932: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Aug 27 13:44:46.937: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:44:46.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-162" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:44:46.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7570" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.247 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":18,"completed":14,"skipped":3731,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:44:47.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 27 13:44:47.158: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:47.161: INFO: Number of nodes with available pods: 0 Aug 27 13:44:47.161: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:48.166: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:48.169: INFO: Number of nodes with available pods: 1 Aug 27 13:44:48.170: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:49.166: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:49.170: INFO: Number of nodes with available pods: 2 Aug 27 13:44:49.170: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Aug 27 13:44:49.186: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:49.190: INFO: Number of nodes with available pods: 1 Aug 27 13:44:49.190: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:50.195: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:50.200: INFO: Number of nodes with available pods: 1 Aug 27 13:44:50.200: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:51.196: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:51.200: INFO: Number of nodes with available pods: 1 Aug 27 13:44:51.200: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:52.196: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:52.200: INFO: Number of nodes with available pods: 1 Aug 27 13:44:52.200: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:53.196: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:53.200: INFO: Number of nodes with available pods: 1 Aug 27 13:44:53.200: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:54.196: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:54.200: INFO: Number of nodes with available pods: 1 Aug 27 13:44:54.200: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:55.195: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:55.199: INFO: Number of nodes with available pods: 1 Aug 27 13:44:55.199: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:56.197: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:56.201: INFO: Number of nodes with available pods: 1 Aug 27 13:44:56.201: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:44:57.195: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:44:57.198: INFO: Number of nodes with available pods: 2 Aug 27 13:44:57.199: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-754, will wait for the garbage collector to delete the pods Aug 27 13:44:57.261: INFO: Deleting DaemonSet.extensions daemon-set took: 6.367309ms Aug 27 13:44:57.762: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.284412ms Aug 27 13:45:05.665: INFO: Number of nodes with available pods: 0 Aug 27 13:45:05.665: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 13:45:05.669: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"46810"},"items":null} Aug 27 13:45:05.672: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"46810"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:45:05.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-754" for this suite. 
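The "Stop a daemon pod, check that the daemon pod is revived." step above boils down to deleting one of the DaemonSet's pods and waiting for the controller to schedule a replacement on the same node, which is what the "Number of nodes with available pods" polling tracks. A hedged sketch of the deletion half (the label selector is the assumed "name=daemon-set" from the earlier sketch, not necessarily the selector the framework uses):

    package sketches

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteOneDaemonPod deletes a single pod owned by the DaemonSet; the
    // controller is then expected to recreate it on the affected node.
    func deleteOneDaemonPod(cs kubernetes.Interface, namespace string) error {
        pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
            metav1.ListOptions{LabelSelector: "name=daemon-set"}) // assumed label
        if err != nil {
            return err
        }
        if len(pods.Items) == 0 {
            return fmt.Errorf("no daemon pods found in %s", namespace)
        }
        victim := pods.Items[0].Name
        return cs.CoreV1().Pods(namespace).Delete(
            context.TODO(), victim, metav1.DeleteOptions{})
    }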
• [SLOW TEST:18.665 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":18,"completed":15,"skipped":3864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:45:05.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 27 13:45:05.768: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:05.771: INFO: Number of nodes with available pods: 0 Aug 27 13:45:05.771: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:45:06.778: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:06.782: INFO: Number of nodes with available pods: 0 Aug 27 13:45:06.782: INFO: Node capi-leguer-md-0-555f949c67-5brzb is running more than one daemon pod Aug 27 13:45:07.778: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:07.783: INFO: Number of nodes with available pods: 2 Aug 27 13:45:07.783: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
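Forcing a daemon pod's phase to Failed, as the step above describes, goes through the pod's status subresource; the DaemonSet controller treats a failed daemon pod as gone and creates a replacement, which is what the polling below waits for. A hedged client-go sketch of that pattern (package name and clientset wiring are assumptions carried over from the earlier sketches):

    package sketches

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // markPodFailed sets a pod's status phase to Failed via the status
    // subresource, simulating a failed daemon pod.
    func markPodFailed(cs kubernetes.Interface, namespace, podName string) error {
        pod, err := cs.CoreV1().Pods(namespace).Get(
            context.TODO(), podName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Status.Phase = corev1.PodFailed
        _, err = cs.CoreV1().Pods(namespace).UpdateStatus(
            context.TODO(), pod, metav1.UpdateOptions{})
        return err
    }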
Aug 27 13:45:07.804: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:07.808: INFO: Number of nodes with available pods: 1 Aug 27 13:45:07.808: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod Aug 27 13:45:08.813: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:08.817: INFO: Number of nodes with available pods: 1 Aug 27 13:45:08.817: INFO: Node capi-leguer-md-0-555f949c67-tw45m is running more than one daemon pod Aug 27 13:45:09.815: INFO: DaemonSet pods can't tolerate node capi-leguer-control-plane-mt48s with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 27 13:45:09.820: INFO: Number of nodes with available pods: 2 Aug 27 13:45:09.820: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2072, will wait for the garbage collector to delete the pods Aug 27 13:45:09.888: INFO: Deleting DaemonSet.extensions daemon-set took: 7.463263ms Aug 27 13:45:10.388: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.300515ms Aug 27 13:45:15.691: INFO: Number of nodes with available pods: 0 Aug 27 13:45:15.691: INFO: Number of running nodes: 0, number of available pods: 0 Aug 27 13:45:15.695: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"46908"},"items":null} Aug 27 13:45:15.698: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"46908"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:45:15.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2072" for this suite. 
• [SLOW TEST:10.027 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":18,"completed":16,"skipped":3923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:45:15.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 13:45:15.769: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 13:45:15.778: INFO: Waiting for terminating namespaces to be deleted... Aug 27 13:45:15.781: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 13:45:15.788: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.788: INFO: Container astaire ready: true, restart count 0 Aug 27 13:45:15.788: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.788: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.788: INFO: Container cassandra ready: true, restart count 0 Aug 27 13:45:15.788: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.788: INFO: Container ellis ready: true, restart count 0 Aug 27 13:45:15.788: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.788: INFO: Container homer ready: true, restart count 0 Aug 27 13:45:15.788: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.788: INFO: Container homestead ready: true, restart count 0 Aug 27 13:45:15.788: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.788: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.788: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 13:45:15.788: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.788: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 13:45:15.788: INFO: Logging pods the apiserver thinks is on node 
capi-leguer-md-0-555f949c67-tw45m before test Aug 27 13:45:15.795: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.795: INFO: Container bono ready: true, restart count 0 Aug 27 13:45:15.795: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.795: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.795: INFO: Container chronos ready: true, restart count 0 Aug 27 13:45:15.795: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.795: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.795: INFO: Container etcd ready: true, restart count 0 Aug 27 13:45:15.795: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.795: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 13:45:15.795: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.795: INFO: Container ralf ready: true, restart count 0 Aug 27 13:45:15.795: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.795: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 13:45:15.795: INFO: Container sprout ready: true, restart count 0 Aug 27 13:45:15.795: INFO: Container tailer ready: true, restart count 0 Aug 27 13:45:15.795: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.795: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 13:45:15.795: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 13:45:15.795: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
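The three pods created next all use hostPort 54321 but differ in hostIP (127.0.0.1 vs 172.23.0.8) or protocol (TCP vs UDP), which is why the scheduler can place them on the same node without a port conflict. A hedged sketch of the kind of port declaration involved (container name and the NodeName pinning are simplifications; the test itself pins the pods via the random node label it applies above):

    package sketches

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostPortPod builds a pod that binds hostPort 54321 on the given hostIP
    // and protocol. Two such pods conflict only if they share all three of
    // hostPort, hostIP and protocol.
    func hostPortPod(name, nodeName, hostIP string, proto corev1.Protocol) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                NodeName: nodeName, // simplification; the test uses a nodeSelector
                Containers: []corev1.Container{{
                    Name:  "agnhost", // illustrative container name
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 54321,
                        HostPort:      54321,
                        HostIP:        hostIP,
                        Protocol:      proto,
                    }},
                }},
            },
        }
    }

Building pod1, pod2 and pod3 from this helper with (127.0.0.1, TCP), (172.23.0.8, TCP) and (172.23.0.8, UDP) reproduces the combination that the curl and nc connectivity checks below then probe from the e2e-host-exec pod.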
STEP: verifying the node has the label kubernetes.io/e2e-a1955cb2-a859-475c-9fed-50533c72318a 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.23.0.8 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.23.0.8 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Aug 27 13:45:25.901: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.23.0.8 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:25.901: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 Aug 27 13:45:26.049: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.23.0.8:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:26.049: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 UDP Aug 27 13:45:26.179: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.23.0.8 54321] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:26.179: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Aug 27 13:45:31.301: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.23.0.8 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:31.301: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 Aug 27 13:45:31.433: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.23.0.8:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:31.433: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 UDP Aug 27 13:45:31.547: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.23.0.8 54321] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:31.547: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Aug 27 13:45:36.637: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.23.0.8 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:36.637: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod 
e2e-host-exec to serverIP: 172.23.0.8, port: 54321 Aug 27 13:45:36.763: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.23.0.8:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:36.763: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 UDP Aug 27 13:45:36.888: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.23.0.8 54321] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:36.888: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Aug 27 13:45:42.001: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.23.0.8 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:42.001: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 Aug 27 13:45:42.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.23.0.8:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:42.124: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 UDP Aug 27 13:45:42.283: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.23.0.8 54321] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:42.284: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Aug 27 13:45:47.398: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.23.0.8 http://127.0.0.1:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:47.398: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 Aug 27 13:45:47.504: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.23.0.8:54321/hostname] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:47.504: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.23.0.8, port: 54321 UDP Aug 27 13:45:47.629: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.23.0.8 54321] Namespace:sched-pred-6830 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Aug 27 13:45:47.629: INFO: >>> kubeConfig: /root/.kube/config STEP: removing the label kubernetes.io/e2e-a1955cb2-a859-475c-9fed-50533c72318a off the node capi-leguer-md-0-555f949c67-5brzb STEP: verifying the 
node doesn't have the label kubernetes.io/e2e-a1955cb2-a859-475c-9fed-50533c72318a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:45:52.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6830" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:37.045 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":18,"completed":17,"skipped":4074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 13:45:52.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 13:45:58.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3305" for this suite. STEP: Destroying namespace "nsdeletetest-1832" for this suite. Aug 27 13:45:58.928: INFO: Namespace nsdeletetest-1832 was already deleted STEP: Destroying namespace "nsdeletetest-9425" for this suite. • [SLOW TEST:6.144 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":18,"completed":18,"skipped":5526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 27 13:45:58.934: INFO: Running AfterSuite actions on all nodes Aug 27 13:45:58.934: INFO: Running AfterSuite actions on node 1 Aug 27 13:45:58.934: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml {"msg":"Test Suite completed","total":18,"completed":18,"skipped":5650,"failed":0} Ran 18 of 5668 Specs in 836.579 seconds SUCCESS! -- 18 Passed | 0 Failed | 0 Pending | 5650 Skipped PASS Ginkgo ran 1 suite in 13m58.172915496s Test Suite Passed