I0411 16:47:06.568755      16 e2e.go:126] Starting e2e run "e1644d4e-fd15-48f5-aac7-e0bc44868a17" on Ginkgo node 1
Apr 11 16:47:06.584: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1712854026 - will randomize all specs

Will run 23 of 7069 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
  [SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:77
Apr 11 16:47:06.769: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 16:47:06.771: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 11 16:47:06.797: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 11 16:47:06.828: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 11 16:47:06.828: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 11 16:47:06.828: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 11 16:47:06.833: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Apr 11 16:47:06.833: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 11 16:47:06.833: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 11 16:47:06.833: INFO: e2e test version: v1.26.13
Apr 11 16:47:06.835: INFO: kube-apiserver version: v1.26.6
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:77
Apr 11 16:47:06.835: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 16:47:06.840: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.072 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
------------------------------
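For readers reproducing this setup outside the framework: the suite builds its client from the kubeconfig logged above and then polls until every node is schedulable. A minimal client-go sketch of that bootstrap, assuming the same kubeconfig path (the program is illustrative, not the framework's own code):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly the "nodes to be schedulable" check: every node reports
	// Ready=True and is not marked unschedulable.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v unschedulable=%v\n", n.Name, ready, n.Spec.Unschedulable)
	}
}
```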
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
test/e2e/scheduling/predicates.go:466
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:47:06.891
Apr 11 16:47:06.891: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/11/24 16:47:06.892
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:47:06.904
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:47:06.908
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:97
Apr 11 16:47:06.912: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 11 16:47:06.921: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 16:47:06.924: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 11 16:47:06.931: INFO: concurrent-28547566-b52kd from cronjob-7568 started at 2024-04-11 16:46:00 +0000 UTC (1 container statuses recorded)
Apr 11 16:47:06.931: INFO: Container c ready: true, restart count 0
Apr 11 16:47:06.931: INFO: concurrent-28547567-jzpkm from cronjob-7568 started at 2024-04-11 16:47:00 +0000 UTC (1 container statuses recorded)
Apr 11 16:47:06.931: INFO: Container c ready: true, restart count 0
Apr 11 16:47:06.931: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded)
Apr 11 16:47:06.931: INFO: Container loopdev ready: true, restart count 0
Apr 11 16:47:06.931: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 16:47:06.931: INFO: Container kindnet-cni ready: true, restart count 0
Apr 11 16:47:06.931: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 16:47:06.931: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:466
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 16:47:06.931
Apr 11 16:47:06.939: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3300" to be "running"
Apr 11 16:47:06.942: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020606ms
Apr 11 16:47:08.947: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007864049s
Apr 11 16:47:08.947: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 16:47:08.95
STEP: Trying to apply a random label on the found node. 04/11/24 16:47:08.959
STEP: verifying the node has the label kubernetes.io/e2e-016e15c8-e3f5-49e5-ba36-0c08dbc5a1e1 42 04/11/24 16:47:08.973
STEP: Trying to relaunch the pod, now with labels. 04/11/24 16:47:08.977
Apr 11 16:47:08.982: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3300" to be "not pending"
Apr 11 16:47:08.985: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.165567ms
Apr 11 16:47:10.993: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.011044104s
Apr 11 16:47:10.993: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-016e15c8-e3f5-49e5-ba36-0c08dbc5a1e1 off the node v126-worker2 04/11/24 16:47:10.996
STEP: verifying the node doesn't have the label kubernetes.io/e2e-016e15c8-e3f5-49e5-ba36-0c08dbc5a1e1 04/11/24 16:47:11.014
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:47:11.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-3300" for this suite. 04/11/24 16:47:11.023
------------------------------
• [4.137 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:466
------------------------------
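The spec's flow above is: run an unlabeled probe pod to discover a schedulable node, put a unique label on that node, then relaunch the pod with a matching nodeSelector and assert it schedules. A sketch of the label-and-relaunch step with client-go (the label key/value and the pause image are stand-ins, not the test's exact values):

```go
package e2esketch

import (
	"context"
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func relaunchWithNodeSelector(cs kubernetes.Interface, nodeName, ns string) error {
	// Apply a unique label to the node found by the probe pod.
	patch, _ := json.Marshal(map[string]any{
		"metadata": map[string]any{"labels": map[string]string{"example.com/e2e": "42"}},
	})
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	// Relaunch the pod with a matching nodeSelector; the scheduler must
	// place it on the labeled node, which is what the spec asserts.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e": "42"},
			Containers:   []corev1.Container{{Name: "c", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}
```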
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]
test/e2e/apimachinery/namespace.go:299
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:47:11.04
Apr 11 16:47:11.040: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/11/24 16:47:11.042
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:47:11.053
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:47:11.056
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should apply changes to a namespace status [Conformance]
  test/e2e/apimachinery/namespace.go:299
STEP: Read namespace status 04/11/24 16:47:11.06
Apr 11 16:47:11.064: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}
STEP: Patch namespace status 04/11/24 16:47:11.064
Apr 11 16:47:11.069: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"}
STEP: Update namespace status 04/11/24 16:47:11.069
Apr 11 16:47:11.077: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"}
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:47:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-3080" for this suite. 04/11/24 16:47:11.082
------------------------------
• [0.047 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should apply changes to a namespace status [Conformance]
  test/e2e/apimachinery/namespace.go:299
------------------------------
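The conditions logged above are written through the namespace's status subresource, not the main object endpoint. A sketch of the patch step, assuming a client built as in the first example (the condition payload mirrors what the log shows):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func patchNamespaceStatus(cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
	// Merge-patch a custom condition into status; note the trailing
	// "status" argument, which selects the status subresource endpoint.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatch","status":"True","reason":"E2E","message":"Patched by an e2e test"}]}}`)
	return cs.CoreV1().Namespaces().Patch(context.TODO(), name,
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
}
```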
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
test/e2e/scheduling/preemption.go:814
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:47:11.092
Apr 11 16:47:11.092: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 16:47:11.094
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:47:11.104
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:47:11.107
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:97
Apr 11 16:47:11.121: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 16:48:11.150: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:48:11.153
Apr 11 16:48:11.154: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path 04/11/24 16:48:11.155
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:48:11.167
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:48:11.172
[BeforeEach] PriorityClass endpoints
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] PriorityClass endpoints
  test/e2e/scheduling/preemption.go:771
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/scheduling/preemption.go:814
Apr 11 16:48:11.189: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
Apr 11 16:48:11.192: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  test/e2e/framework/node/init/init.go:32
Apr 11 16:48:11.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] PriorityClass endpoints
  test/e2e/scheduling/preemption.go:787
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:48:11.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] PriorityClass endpoints
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] PriorityClass endpoints
  dump namespaces | framework.go:196
[DeferCleanup (Each)] PriorityClass endpoints
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-path-9765" for this suite. 04/11/24 16:48:11.258
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-8582" for this suite. 04/11/24 16:48:11.263
------------------------------
• [SLOW TEST] [60.175 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  test/e2e/scheduling/preemption.go:764
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    test/e2e/scheduling/preemption.go:814
------------------------------
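The two INFO lines for "p1" and "p2" above are expected failures: a PriorityClass's value field is immutable, so the spec exercises update and patch against it and checks for the Forbidden error. A sketch of triggering that error (the name p1 and value are taken from the log; everything else is illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func demoPriorityClassImmutableValue(cs kubernetes.Interface) error {
	pc, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(),
		&schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: "p1"},
			Value:      1000,
		}, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Changing Value on update is rejected by the API server with
	// "value: Forbidden: may not be changed in an update".
	pc.Value = 2000
	if _, err := cs.SchedulingV1().PriorityClasses().Update(context.TODO(), pc, metav1.UpdateOptions{}); err != nil {
		fmt.Println("expected:", err)
	}
	return nil
}
```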
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
test/e2e/scheduling/preemption.go:224
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:48:11.268
Apr 11 16:48:11.269: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 16:48:11.27
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:48:11.281
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:48:11.285
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:97
Apr 11 16:48:11.300: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 16:49:11.327: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/scheduling/preemption.go:224
STEP: Create pods that use 4/5 of node resources. 04/11/24 16:49:11.331
Apr 11 16:49:11.357: INFO: Created pod: pod0-0-sched-preemption-low-priority
Apr 11 16:49:11.363: INFO: Created pod: pod0-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 04/11/24 16:49:11.363
Apr 11 16:49:11.363: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6227" to be "running"
Apr 11 16:49:11.366: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 3.141124ms
Apr 11 16:49:13.371: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.008129593s
Apr 11 16:49:13.371: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Apr 11 16:49:13.371: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6227" to be "running"
Apr 11 16:49:13.375: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.535951ms
Apr 11 16:49:13.375: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a critical pod that use same resources as that of a lower priority pod 04/11/24 16:49:13.375
Apr 11 16:49:13.386: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running"
Apr 11 16:49:13.389: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173919ms
Apr 11 16:49:15.393: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007340616s
Apr 11 16:49:17.394: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007780041s
Apr 11 16:49:19.395: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008738109s
Apr 11 16:49:19.395: INFO: Pod "critical-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:49:19.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-6227" for this suite. 04/11/24 16:49:19.451
------------------------------
• [SLOW TEST] [68.188 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/scheduling/preemption.go:224
------------------------------
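critical-pod schedules even though the low- and medium-priority pods already consume most of the node, because system-cluster-critical maps to a very high priority and the scheduler preempts a lower-priority victim to make room; that is why critical-pod sits Pending for a few seconds above. A sketch of such a pod spec (the image and CPU request are placeholders; the actual test derives requests from node capacity):

```go
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// criticalPod sketches a pod that can preempt lower-priority pods.
// The system-cluster-critical class is only allowed in kube-system.
func criticalPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "c",
				Image: "registry.k8s.io/pause:3.9",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}
}
```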
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
test/e2e/apps/daemon_set.go:873
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:49:19.468
Apr 11 16:49:19.469: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/11/24 16:49:19.471
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:49:19.481
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:49:19.485
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:157
[It] should verify changes to a daemon set status [Conformance]
  test/e2e/apps/daemon_set.go:873
STEP: Creating simple DaemonSet "daemon-set" 04/11/24 16:49:19.505
STEP: Check that daemon pods launch on every node of the cluster. 04/11/24 16:49:19.511
Apr 11 16:49:19.516: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:49:19.519: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:49:19.519: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:49:20.524: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:49:20.528: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:49:20.528: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:49:21.525: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:49:21.529: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 11 16:49:21.529: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Getting /status 04/11/24 16:49:21.532
Apr 11 16:49:21.541: INFO: Daemon Set daemon-set has Conditions: []
STEP: updating the DaemonSet Status 04/11/24 16:49:21.541
Apr 11 16:49:21.551: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the daemon set status to be updated 04/11/24 16:49:21.551
Apr 11 16:49:21.553: INFO: Observed &DaemonSet event: ADDED
Apr 11 16:49:21.554: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.554: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.554: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.554: INFO: Found daemon set daemon-set in namespace daemonsets-8540 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Apr 11 16:49:21.554: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status 04/11/24 16:49:21.554
STEP: watching for the daemon set status to be patched 04/11/24 16:49:21.564
Apr 11 16:49:21.566: INFO: Observed &DaemonSet event: ADDED
Apr 11 16:49:21.567: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.567: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.567: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.567: INFO: Observed daemon set daemon-set in namespace daemonsets-8540 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Apr 11 16:49:21.567: INFO: Observed &DaemonSet event: MODIFIED
Apr 11 16:49:21.567: INFO: Found daemon set daemon-set in namespace daemonsets-8540 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC  }]
Apr 11 16:49:21.567: INFO: Daemon set daemon-set has a patched status
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:122
STEP: Deleting DaemonSet "daemon-set" 04/11/24 16:49:21.57
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8540, will wait for the garbage collector to delete the pods 04/11/24 16:49:21.571
Apr 11 16:49:21.629: INFO: Deleting DaemonSet.extensions daemon-set took: 5.081272ms
Apr 11 16:49:21.729: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.159392ms
Apr 11 16:49:24.033: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:49:24.033: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Apr 11 16:49:24.037: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7509577"},"items":null}
Apr 11 16:49:24.040: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7509577"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:49:24.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-8540" for this suite. 04/11/24 16:49:24.054
------------------------------
• [4.591 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance]
  test/e2e/apps/daemon_set.go:873
------------------------------
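The status update and patch above go through the DaemonSet's status subresource, and the watch loop confirms the new condition arrives on a MODIFIED event. A sketch of the update step (namespace and condition text follow the log; the watch is omitted):

```go
package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func updateDaemonSetStatus(cs kubernetes.Interface, ns string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Append a custom condition and write it via the status subresource;
	// a plain Update against the main endpoint ignores status changes.
	ds.Status.Conditions = append(ds.Status.Conditions, appsv1.DaemonSetCondition{
		Type:    "StatusUpdate",
		Status:  corev1.ConditionTrue,
		Reason:  "E2E",
		Message: "Set from e2e test",
	})
	_, err = cs.AppsV1().DaemonSets(ns).UpdateStatus(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}
```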
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
test/e2e/apimachinery/namespace.go:251
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:49:24.103
Apr 11 16:49:24.103: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/11/24 16:49:24.105
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:49:24.116
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:49:24.12
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:251
STEP: Creating a test namespace 04/11/24 16:49:24.125
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:49:24.136
STEP: Creating a service in the namespace 04/11/24 16:49:24.14
STEP: Deleting the namespace 04/11/24 16:49:24.148
STEP: Waiting for the namespace to be removed. 04/11/24 16:49:24.153
STEP: Recreating the namespace 04/11/24 16:49:30.157
STEP: Verifying there is no service in the namespace 04/11/24 16:49:30.169
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:49:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-468" for this suite. 04/11/24 16:49:30.177
STEP: Destroying namespace "nsdeletetest-5722" for this suite. 04/11/24 16:49:30.182
Apr 11 16:49:30.185: INFO: Namespace nsdeletetest-5722 was already deleted
STEP: Destroying namespace "nsdeletetest-6352" for this suite. 04/11/24 16:49:30.185
------------------------------
• [SLOW TEST] [6.087 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:251
------------------------------
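The point of this spec is cascade deletion: removing a namespace removes every object inside it, so once the namespace name is recreated the service list comes back empty. A sketch of the create-delete cycle (namespace and service names are placeholders; the polling for removal is elided):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func namespaceDeletionRemovesServices(cs kubernetes.Interface) error {
	ns := "nsdeletetest"
	if _, err := cs.CoreV1().Namespaces().Create(context.TODO(),
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		return err
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}
	if _, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Deleting the namespace cascades to the service; after the namespace
	// finalizes (the ~6s wait visible in the log), recreating it and
	// listing services yields an empty list, which is what the test checks.
	return cs.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{})
}
```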
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
test/e2e/scheduling/preemption.go:624
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:49:30.248
Apr 11 16:49:30.248: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 16:49:30.249
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:49:30.26
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:49:30.263
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:97
Apr 11 16:49:30.280: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 16:50:30.306: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:50:30.31
Apr 11 16:50:30.310: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path 04/11/24 16:50:30.312
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:50:30.324
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:50:30.327
[BeforeEach] PreemptionExecutionPath
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] PreemptionExecutionPath
  test/e2e/scheduling/preemption.go:576
STEP: Finding an available node 04/11/24 16:50:30.331
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 16:50:30.331
Apr 11 16:50:30.339: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-2130" to be "running"
Apr 11 16:50:30.343: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.048701ms
Apr 11 16:50:32.347: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007457339s
Apr 11 16:50:32.347: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 16:50:32.35
Apr 11 16:50:32.360: INFO: found a healthy node: v126-worker2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/scheduling/preemption.go:624
Apr 11 16:50:38.434: INFO: pods created so far: [1 1 1]
Apr 11 16:50:38.434: INFO: length of pods created so far: 3
Apr 11 16:50:40.443: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  test/e2e/framework/node/init/init.go:32
Apr 11 16:50:47.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] PreemptionExecutionPath
  test/e2e/scheduling/preemption.go:549
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:50:47.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] PreemptionExecutionPath
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] PreemptionExecutionPath
  dump namespaces | framework.go:196
[DeferCleanup (Each)] PreemptionExecutionPath
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-path-2130" for this suite. 04/11/24 16:50:47.514
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-2139" for this suite. 04/11/24 16:50:47.519
------------------------------
• [SLOW TEST] [77.276 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  test/e2e/scheduling/preemption.go:537
    runs ReplicaSets to verify preemption running path [Conformance]
    test/e2e/scheduling/preemption.go:624
------------------------------
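PreemptionExecutionPath runs ReplicaSets at several priority levels on one nearly full node; the "[1 1 1]" then "[2 2 1]" counters above are per-priority replica counts, showing higher-priority replicas displacing lower ones. A sketch of a ReplicaSet pinned to a priority class (the names echo the log's rs-pod3/p3; resource sizing is omitted):

```go
package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func replicaSetAtPriority(name, priorityClass string, replicas int32) *appsv1.ReplicaSet {
	labels := map[string]string{"app": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods inherit this priority; on a full node the
					// scheduler preempts lower-priority victims for them.
					PriorityClassName: priorityClass,
					Containers:        []corev1.Container{{Name: name, Image: "registry.k8s.io/pause:3.9"}},
				},
			},
		},
	}
}
```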
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:443 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:50:47.539 Apr 11 16:50:47.539: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 16:50:47.541 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:50:47.552 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:50:47.556 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 16:50:47.561: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 16:50:47.569: INFO: Waiting for terminating namespaces to be deleted... Apr 11 16:50:47.572: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test Apr 11 16:50:47.579: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container status recorded) Apr 11 16:50:47.579: INFO: Container loopdev ready: true, restart count 0 Apr 11 16:50:47.579: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container status recorded) Apr 11 16:50:47.579: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:50:47.579: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container status recorded) Apr 11 16:50:47.579: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 16:50:47.579: INFO: pod4 from sched-preemption-path-2130 started at 2024-04-11 16:50:39 +0000 UTC (1 container status recorded) Apr 11 16:50:47.579: INFO: Container pod4 ready: true, restart count 0 Apr 11 16:50:47.579: INFO: rs-pod3-q4hbm from sched-preemption-path-2130 started at 2024-04-11 16:50:36 +0000 UTC (1 container status recorded) Apr 11 16:50:47.579: INFO: Container pod3 ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:443 STEP: Trying to schedule Pod with nonempty NodeSelector. 04/11/24 16:50:47.579 STEP: Considering event: Type = [Warning], Name = [restricted-pod.17c547de5bcc992a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..]
04/11/24 16:50:53.637 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:50:54.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-2737" for this suite. 04/11/24 16:50:54.641
------------------------------
• [SLOW TEST] [7.107 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:443
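What this spec exercises: a pod whose nodeSelector matches no node label must stay Pending, with the scheduler recording the FailedScheduling event quoted above. A rough client-go equivalent, assuming a hypothetical namespace passed in by the caller (the label key/value pair is intentionally unsatisfiable, mirroring the test):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // schedulePodWithBogusNodeSelector creates a pod whose nodeSelector matches
    // no node, then reads back the scheduler's FailedScheduling event. The pod
    // stays Pending; no node can ever satisfy the selector.
    func schedulePodWithBogusNodeSelector(ctx context.Context, cs kubernetes.Interface, ns string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: ns},
            Spec: corev1.PodSpec{
                NodeSelector: map[string]string{"label": "nonempty"}, // no node carries this label
                Containers: []corev1.Container{{
                    Name:  "app",
                    Image: "registry.k8s.io/pause:3.9",
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            return err
        }
        // The scheduler emits a Warning like the one in the log:
        // "0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, ..."
        evs, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
            FieldSelector: "involvedObject.name=restricted-pod,reason=FailedScheduling",
        })
        if err != nil {
            return err
        }
        for _, e := range evs.Items {
            fmt.Println(e.Type, e.Reason, e.Message)
        }
        return nil
    }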
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:50:54.658 Apr 11 16:50:54.658: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/11/24 16:50:54.66 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:50:54.672 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:50:54.676 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 STEP: Creating simple DaemonSet "daemon-set" 04/11/24 16:50:54.692 STEP: Check that daemon pods launch on every node of the cluster.
04/11/24 16:50:54.699 Apr 11 16:50:54.703: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:54.706: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:54.706: INFO: Node v126-worker is running 0 daemon pods, expected 1 Apr 11 16:50:55.711: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:55.715: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:55.715: INFO: Node v126-worker is running 0 daemon pods, expected 1 Apr 11 16:50:56.712: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:56.716: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 11 16:50:56.716: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: listing all DaemonSets 04/11/24 16:50:56.719 STEP: DeleteCollection of the DaemonSets 04/11/24 16:50:56.723 STEP: Verify that DaemonSets have been deleted 04/11/24 16:50:56.729 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 Apr 11 16:50:56.739: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7509996"},"items":null} Apr 11 16:50:56.743: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7509996"},"items":[{"metadata":{"name":"daemon-set-926lb","generateName":"daemon-set-","namespace":"daemonsets-715","uid":"3bc58c61-f2e2-44b8-8de1-56b078730181","resourceVersion":"7509996","creationTimestamp":"2024-04-11T16:50:54Z","deletionTimestamp":"2024-04-11T16:51:26Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"71a78a49-3507-49fc-aff2-5e22cfad9adb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71a78a49-3507-49fc-aff2-5e22cfad9adb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initiali
zed\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-dzsh8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-dzsh8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"}],"hostIP":"172.22.0.2","podIP":"10.244.1.56","podIPs":[{"ip":"10.244.1.56"}],"startTime":"2024-04-11T16:50:54Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-11T16:50:55Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://a86706f50efca525b3b7cf0a7326c6b510de2a99921cfa01c9f0709e845662de","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-vvjq4","generateName":"daemon-set-","namespace":"daemonsets-715","uid":"825ca6a5-da2e-41bb-bd8d-0606c4a61e17","resourceVersion":"7509995","creationTimestamp":"2024-04-11T16:50:54Z","deletionTimestamp":"2024-04-11T16:51:26Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set",
"pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"71a78a49-3507-49fc-aff2-5e22cfad9adb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71a78a49-3507-49fc-aff2-5e22cfad9adb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-njnt2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-njnt2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSched
ule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"}],"hostIP":"172.22.0.3","podIP":"10.244.2.73","podIPs":[{"ip":"10.244.2.73"}],"startTime":"2024-04-11T16:50:54Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-11T16:50:55Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://f805716bd0938c16df950ae3b4831ab3f0c9d099944fea2a16c6258fbef89f74","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:50:56.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-715" for this suite. 04/11/24 16:50:56.755 ------------------------------ • [2.102 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:50:54.658 Apr 11 16:50:54.658: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/11/24 16:50:54.66 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:50:54.672 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:50:54.676 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 STEP: Creating simple DaemonSet "daemon-set" 04/11/24 16:50:54.692 STEP: Check that daemon pods launch on every node of the cluster. 
04/11/24 16:50:54.699 Apr 11 16:50:54.703: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:54.706: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:54.706: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 11 16:50:55.711: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:55.715: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:55.715: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 11 16:50:56.712: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:56.716: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 11 16:50:56.716: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: listing all DeamonSets 04/11/24 16:50:56.719 STEP: DeleteCollection of the DaemonSets 04/11/24 16:50:56.723 STEP: Verify that ReplicaSets have been deleted 04/11/24 16:50:56.729 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 Apr 11 16:50:56.739: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7509996"},"items":null} Apr 11 16:50:56.743: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7509996"},"items":[{"metadata":{"name":"daemon-set-926lb","generateName":"daemon-set-","namespace":"daemonsets-715","uid":"3bc58c61-f2e2-44b8-8de1-56b078730181","resourceVersion":"7509996","creationTimestamp":"2024-04-11T16:50:54Z","deletionTimestamp":"2024-04-11T16:51:26Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"71a78a49-3507-49fc-aff2-5e22cfad9adb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71a78a49-3507-49fc-aff2-5e22cfad9adb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initiali
zed\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-dzsh8","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-dzsh8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"}],"hostIP":"172.22.0.2","podIP":"10.244.1.56","podIPs":[{"ip":"10.244.1.56"}],"startTime":"2024-04-11T16:50:54Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-11T16:50:55Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://a86706f50efca525b3b7cf0a7326c6b510de2a99921cfa01c9f0709e845662de","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-vvjq4","generateName":"daemon-set-","namespace":"daemonsets-715","uid":"825ca6a5-da2e-41bb-bd8d-0606c4a61e17","resourceVersion":"7509995","creationTimestamp":"2024-04-11T16:50:54Z","deletionTimestamp":"2024-04-11T16:51:26Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set",
"pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"71a78a49-3507-49fc-aff2-5e22cfad9adb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71a78a49-3507-49fc-aff2-5e22cfad9adb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-11T16:50:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-njnt2","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-njnt2","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSched
ule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-11T16:50:54Z"}],"hostIP":"172.22.0.3","podIP":"10.244.2.73","podIPs":[{"ip":"10.244.2.73"}],"startTime":"2024-04-11T16:50:54Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-11T16:50:55Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://f805716bd0938c16df950ae3b4831ab3f0c9d099944fea2a16c6258fbef89f74","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:50:56.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-715" for this suite. 04/11/24 16:50:56.755 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:50:56.761 Apr 11 16:50:56.761: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/11/24 16:50:56.763 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:50:56.774 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:50:56.778 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385 Apr 11 16:50:56.795: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
04/11/24 16:50:56.8 Apr 11 16:50:56.805: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:56.808: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:56.808: INFO: Node v126-worker is running 0 daemon pods, expected 1 Apr 11 16:50:57.811: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:57.814: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:50:57.814: INFO: Node v126-worker is running 0 daemon pods, expected 1 Apr 11 16:50:58.814: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:58.818: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 11 16:50:58.818: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Update daemon pods' image. 04/11/24 16:50:58.833 STEP: Check that daemon pods' images are updated. 04/11/24 16:50:58.846 Apr 11 16:50:58.850: INFO: Wrong image for pod: daemon-set-l5t6m. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 11 16:50:58.850: INFO: Wrong image for pod: daemon-set-w5g4n. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 11 16:50:58.855: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:50:59.859: INFO: Wrong image for pod: daemon-set-l5t6m. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 11 16:50:59.863: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:00.860: INFO: Wrong image for pod: daemon-set-l5t6m. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 11 16:51:00.865: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:01.859: INFO: Pod daemon-set-5jv65 is not available Apr 11 16:51:01.859: INFO: Wrong image for pod: daemon-set-l5t6m. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 11 16:51:01.864: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:02.864: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:03.860: INFO: Pod daemon-set-pr4wb is not available Apr 11 16:51:03.866: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 04/11/24 16:51:03.866 Apr 11 16:51:03.874: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:03.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 11 16:51:03.877: INFO: Node v126-worker is running 0 daemon pods, expected 1 Apr 11 16:51:04.882: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:04.887: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 11 16:51:04.887: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/11/24 16:51:04.905 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8258, will wait for the garbage collector to delete the pods 04/11/24 16:51:04.905 Apr 11 16:51:04.964: INFO: Deleting DaemonSet.extensions daemon-set took: 5.115801ms Apr 11 16:51:05.065: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.50197ms Apr 11 16:51:07.268: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:07.268: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 11 16:51:07.271: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7510119"},"items":null} Apr 11 16:51:07.274: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7510119"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:07.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-8258" for this suite.
04/11/24 16:51:07.287
------------------------------
• [SLOW TEST] [10.531 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385
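The rolling update itself is just a change to the DaemonSet's pod template: with the default RollingUpdate strategy (maxUnavailable 1) the controller replaces daemon pods one node at a time, which is why the "Wrong image" lines above disappear pod by pod rather than all at once. A sketch of the image bump as a strategic-merge patch; the container name "app" matches the pod dump earlier, the rest is illustrative:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // bumpDaemonSetImage patches the pod template image; the DaemonSet's
    // RollingUpdate strategy then replaces daemon pods node by node.
    func bumpDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns string) error {
        patch := []byte(fmt.Sprintf(
            `{"spec":{"template":{"spec":{"containers":[{"name":%q,"image":%q}]}}}}`,
            "app", "registry.k8s.io/e2e-test-images/agnhost:2.43"))
        _, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }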
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:07.305 Apr 11 16:51:07.305: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/11/24 16:51:07.307 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:07.319 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:07.323 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394 STEP: Creating namespace "e2e-ns-jr9td" 04/11/24 16:51:07.327 Apr 11 16:51:07.338: INFO: Namespace "e2e-ns-jr9td-2807" has []v1.FinalizerName{"kubernetes"} STEP: Adding e2e finalizer to namespace "e2e-ns-jr9td-2807" 04/11/24 16:51:07.338 Apr 11 16:51:07.345: INFO: Namespace "e2e-ns-jr9td-2807" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} STEP: Removing e2e finalizer from namespace "e2e-ns-jr9td-2807" 04/11/24 16:51:07.345 Apr 11 16:51:07.354: INFO: Namespace "e2e-ns-jr9td-2807" has []v1.FinalizerName{"kubernetes"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:07.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-6055" for this suite. 04/11/24 16:51:07.359 STEP: Destroying namespace "e2e-ns-jr9td-2807" for this suite. 04/11/24 16:51:07.364
------------------------------
• [0.063 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394
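Namespace finalizers live in spec.finalizers and are changed through the namespaces/finalize subresource, which client-go exposes as Finalize on the namespace client; a plain Update does not modify that field. The finalizer name below is taken from the log; everything else is a sketch. Removal is the same call with the entry filtered back out of the slice:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // addNamespaceFinalizer appends a finalizer to ns.Spec.Finalizers and
    // writes it back via the namespaces/finalize subresource.
    func addNamespaceFinalizer(ctx context.Context, cs kubernetes.Interface, name string) (*corev1.Namespace, error) {
        ns, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return nil, err
        }
        ns.Spec.Finalizers = append(ns.Spec.Finalizers,
            corev1.FinalizerName("e2e.example.com/fakeFinalizer"))
        // Finalize targets the namespaces/finalize subresource.
        return cs.CoreV1().Namespaces().Finalize(ctx, ns, metav1.UpdateOptions{})
    }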
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:331 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:07.378 Apr 11 16:51:07.378: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 16:51:07.38 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:07.391 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:07.395 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 16:51:07.399: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 16:51:07.407: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 16:51:07.410: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test Apr 11 16:51:07.416: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container status recorded) Apr 11 16:51:07.416: INFO: Container loopdev ready: true, restart count 0 Apr 11 16:51:07.416: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container status recorded) Apr 11 16:51:07.416: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:51:07.416: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container status recorded) Apr 11 16:51:07.416: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:331 STEP: verifying the node has the label node v126-worker2 04/11/24 16:51:07.437 Apr 11 16:51:07.447: INFO: Pod create-loop-devs-tmv9n requesting resource cpu=0m on Node v126-worker2 Apr 11 16:51:07.447: INFO: Pod kindnet-l6j8p requesting resource cpu=100m on Node v126-worker2 Apr 11 16:51:07.447: INFO: Pod kube-proxy-zhx9l requesting resource cpu=0m on Node v126-worker2 STEP: Starting Pods to consume most of the cluster CPU. 04/11/24 16:51:07.447 Apr 11 16:51:07.447: INFO: Creating a pod which consumes cpu=61530m on Node v126-worker2 Apr 11 16:51:07.455: INFO: Waiting up to 5m0s for pod "filler-pod-64558cc2-ccec-484e-9514-925c17fde859" in namespace "sched-pred-8005" to be "running" Apr 11 16:51:07.458: INFO: Pod "filler-pod-64558cc2-ccec-484e-9514-925c17fde859": Phase="Pending", Reason="", readiness=false. Elapsed: 3.258395ms Apr 11 16:51:09.463: INFO: Pod "filler-pod-64558cc2-ccec-484e-9514-925c17fde859": Phase="Running", Reason="", readiness=true. Elapsed: 2.008278961s Apr 11 16:51:09.463: INFO: Pod "filler-pod-64558cc2-ccec-484e-9514-925c17fde859" satisfied condition "running" STEP: Creating another pod that requires an unavailable amount of CPU. 04/11/24 16:51:09.463 STEP: Considering event: Type = [Normal], Name = [filler-pod-64558cc2-ccec-484e-9514-925c17fde859.17c547e193f62559], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8005/filler-pod-64558cc2-ccec-484e-9514-925c17fde859 to v126-worker2] 04/11/24 16:51:09.468 STEP: Considering event: Type = [Normal], Name = [filler-pod-64558cc2-ccec-484e-9514-925c17fde859.17c547e1b75a1456], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 16:51:09.468 STEP: Considering event: Type = [Normal], Name = [filler-pod-64558cc2-ccec-484e-9514-925c17fde859.17c547e1b84735d1], Reason = [Created], Message = [Created container filler-pod-64558cc2-ccec-484e-9514-925c17fde859] 04/11/24 16:51:09.468 STEP: Considering event: Type = [Normal], Name = [filler-pod-64558cc2-ccec-484e-9514-925c17fde859.17c547e1c71fc0f4], Reason = [Started], Message = [Started container filler-pod-64558cc2-ccec-484e-9514-925c17fde859] 04/11/24 16:51:09.468 STEP: Considering event: Type = [Warning], Name = [additional-pod.17c547e20c2279c3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..]
04/11/24 16:51:09.48 STEP: removing the label node off the node v126-worker2 04/11/24 16:51:10.481 STEP: verifying the node doesn't have the label node 04/11/24 16:51:10.496 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:10.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-8005" for this suite. 04/11/24 16:51:10.505 ------------------------------ • [3.132 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:331
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:10.519 Apr 11 16:51:10.519: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/11/24 16:51:10.521 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:10.533 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:10.536 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205 Apr 11 16:51:10.552: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 04/11/24 16:51:10.558 Apr 11 16:51:10.561: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:10.561: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Change node label to blue, check that daemon pod is launched. 04/11/24 16:51:10.561 Apr 11 16:51:10.582: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:10.582: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 11 16:51:11.587: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:11.587: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 11 16:51:12.586: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 11 16:51:12.586: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set STEP: Update the node label to green, and wait for daemons to be unscheduled 04/11/24 16:51:12.59 Apr 11 16:51:12.608: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 11 16:51:12.608: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set Apr 11 16:51:13.613: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:13.613: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 04/11/24 16:51:13.613 Apr 11 16:51:13.623: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:13.623: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 11 16:51:14.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:14.628: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 11 16:51:15.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:15.628: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 11 16:51:16.627: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 11 16:51:16.627: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset
daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/11/24 16:51:16.633 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1839, will wait for the garbage collector to delete the pods 04/11/24 16:51:16.633 Apr 11 16:51:16.693: INFO: Deleting DaemonSet.extensions daemon-set took: 5.106818ms Apr 11 16:51:16.793: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.209818ms Apr 11 16:51:19.197: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 11 16:51:19.197: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 11 16:51:19.200: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7510269"},"items":null} Apr 11 16:51:19.203: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7510269"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:19.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-1839" for this suite. 04/11/24 16:51:19.228 ------------------------------ • [SLOW TEST] [8.715 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:19.242 Apr 11 16:51:19.242: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/11/24 16:51:19.244 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:19.255 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:19.259 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 STEP: Updating Namespace "namespaces-9239" 04/11/24 16:51:19.263 Apr 11 16:51:19.270: INFO: Namespace "namespaces-9239" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"e1644d4e-fd15-48f5-aac7-e0bc44868a17", "kubernetes.io/metadata.name":"namespaces-9239", "namespaces-9239":"updated", "pod-security.kubernetes.io/enforce":"baseline"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:19.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-9239" for this suite.
04/11/24 16:51:19.274 ------------------------------ • [0.037 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:443 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:19.285 Apr 11 16:51:19.285: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/11/24 16:51:19.286 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:19.296 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:19.3 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:443 Apr 11 16:51:19.319: FAIL: Conformance test suite needs a cluster with at least 2 nodes.
Expected : 1 to be > : 1 Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func5.9() test/e2e/apps/daemon_set.go:446 +0x1b6 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 Apr 11 16:51:19.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7510280"},"items":null} Apr 11 16:51:19.328: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7510280"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:19.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 STEP: dump namespace information after failure 04/11/24 16:51:19.34 STEP: Collecting events from namespace "daemonsets-8619". 04/11/24 16:51:19.34 STEP: Found 0 events. 04/11/24 16:51:19.343 Apr 11 16:51:19.346: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 16:51:19.346: INFO: Apr 11 16:51:19.350: INFO: Logging node info for node v126-control-plane Apr 11 16:51:19.353: INFO: Node Info: &Node{ObjectMeta:{v126-control-plane 3a64757e-5950-42e6-b8ed-4667f760117e 7509697 0 2024-02-15 12:43:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2024-04-11 16:49:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 16:49:53 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 16:49:53 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 16:49:53 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 16:49:53 +0000 UTC,LastTransitionTime:2024-02-15 12:43:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.4,},NodeAddress{Type:Hostname,Address:v126-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a96e30d08f8c42b585519e2395c12ea2,SystemUUID:a3f13d5f-0717-4c0d-a2df-008e7d843a90,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 
docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 16:51:19.353: INFO: Logging kubelet events for node v126-control-plane Apr 11 16:51:19.357: INFO: Logging pods the kubelet thinks is on node v126-control-plane Apr 11 16:51:19.385: INFO: etcd-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container etcd ready: true, restart count 0 Apr 11 16:51:19.385: INFO: kube-scheduler-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container kube-scheduler ready: true, restart count 0 Apr 11 16:51:19.385: INFO: kube-proxy-lxqfk started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 16:51:19.385: INFO: kindnet-vn4j4 started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:51:19.385: INFO: local-path-provisioner-6bd6454576-2g84t started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container local-path-provisioner ready: true, restart count 0 Apr 11 16:51:19.385: INFO: create-loop-devs-d8k28 started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container loopdev ready: true, restart count 0 Apr 11 16:51:19.385: INFO: kube-apiserver-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container kube-apiserver ready: true, restart count 0 Apr 11 16:51:19.385: INFO: kube-controller-manager-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container kube-controller-manager ready: true, restart count 0 Apr 11 16:51:19.385: INFO: coredns-787d4945fb-w6k86 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container coredns ready: true, restart count 0 Apr 11 16:51:19.385: INFO: coredns-787d4945fb-xp5nv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.385: INFO: Container coredns ready: true, restart count 0 Apr 11 16:51:19.451: INFO: Latency metrics for node v126-control-plane Apr 11 16:51:19.451: INFO: Logging node info for node v126-worker Apr 11 16:51:19.454: INFO: Node Info: &Node{ObjectMeta:{v126-worker d69cee07-558d-4498-86d9-cff1abedd857 7509340 0 2024-02-15 12:43:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2024-03-28 18:03:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}} status} {kube-controller-manager Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}} } {kubectl Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:spec":{"f:unschedulable":{}}} } {kubelet Update v1 2024-04-11 16:48:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker,Unschedulable:true,Taints:[]Taint{Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:2024-03-28 19:11:09 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 16:48:02 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 16:48:02 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 16:48:02 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 16:48:02 +0000 UTC,LastTransitionTime:2024-02-15 12:43:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.2,},NodeAddress{Type:Hostname,Address:v126-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d18212626141459c831725483d7679ab,SystemUUID:398bd568-4555-4b1a-8660-f75be5056848,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db 
registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:e64fe49f059f513a09c772a8972172b2af6833d092c06cc311171d7135e4525a docker.io/aquasec/kube-hunter:0.6.8],SizeBytes:38278203,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:fc259355994e6c6c1025a7cd2d1bdbf201708e9e11ef1dfd3ef787a7ce45730d registry.k8s.io/build-image/distroless-iptables:v0.2.9],SizeBytes:9501695,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 16:51:19.455: INFO: Logging kubelet events for node v126-worker Apr 11 16:51:19.458: INFO: Logging pods the kubelet thinks is on node v126-worker Apr 11 16:51:19.480: INFO: create-loop-devs-qf7hw started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.480: INFO: 
Container loopdev ready: true, restart count 0 Apr 11 16:51:19.480: INFO: kindnet-llt78 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.480: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:51:19.480: INFO: kube-proxy-6gjpv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.480: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 16:51:19.706: INFO: Latency metrics for node v126-worker Apr 11 16:51:19.706: INFO: Logging node info for node v126-worker2 Apr 11 16:51:19.709: INFO: Node Info: &Node{ObjectMeta:{v126-worker2 325f688d-d472-4d00-af05-b1602ff4d011 7510270 0 2024-02-15 12:43:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2024-03-23 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2024-04-11 16:46:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2024-04-11 16:50:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:capacity":{"f:example.com/fakecpu":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 16:46:55 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 16:46:55 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 16:46:55 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 16:46:55 +0000 UTC,LastTransitionTime:2024-02-15 12:43:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.3,},NodeAddress{Type:Hostname,Address:v126-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a4f500a92ab44e68eb943ba261bf2b3,SystemUUID:3a962073-037f-4c28-a122-8f4b5dfc4ca0,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 
registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b 
registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 16:51:19.710: INFO: Logging kubelet events for node v126-worker2 Apr 11 16:51:19.714: INFO: Logging pods the kubelet thinks is on node v126-worker2 Apr 11 16:51:19.737: INFO: create-loop-devs-tmv9n started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.737: INFO: Container loopdev ready: true, restart count 0 Apr 11 16:51:19.737: INFO: kube-proxy-zhx9l started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.737: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 16:51:19.737: INFO: kindnet-l6j8p started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 16:51:19.737: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:51:20.072: INFO: Latency metrics for node v126-worker2 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-8619" for this suite. 04/11/24 16:51:20.072 ------------------------------ • [FAILED] [0.794 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 [It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:443 Apr 11 16:51:19.319: Conformance test suite needs a cluster with at least 2 nodes. Expected : 1 to be > : 1 In [It] at: test/e2e/apps/daemon_set.go:446
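Why only one node counted: the node dumps in this log show v126-worker cordoned (Unschedulable:true, with a node.kubernetes.io/unschedulable NoSchedule taint added 2024-03-28 19:11:09) and v126-control-plane carrying the node-role.kubernetes.io/control-plane NoSchedule taint, leaving v126-worker2 as the only schedulable node — hence "Expected : 1 to be > : 1". Below is a minimal, hypothetical client-go sketch of this kind of precondition check (a standalone illustration, not the framework's actual helper at daemon_set.go:446; the kubeconfig path is the one this run uses):

// nodecheck.go -- hypothetical sketch: count schedulable nodes the way a
// multi-node conformance precondition might, and report a shortfall.
package main

import (
    "context"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// countSchedulable skips cordoned nodes and nodes with NoSchedule/NoExecute
// taints -- the two conditions that disqualify v126-worker and
// v126-control-plane in the dumps above.
func countSchedulable(nodes []v1.Node) int {
    n := 0
    for _, node := range nodes {
        if node.Spec.Unschedulable {
            continue // v126-worker: Unschedulable:true
        }
        tainted := false
        for _, t := range node.Spec.Taints {
            if t.Effect == v1.TaintEffectNoSchedule || t.Effect == v1.TaintEffectNoExecute {
                tainted = true // v126-control-plane: node-role.kubernetes.io/control-plane:NoSchedule
                break
            }
        }
        if !tainted {
            n++
        }
    }
    return n
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    if n := countSchedulable(nodes.Items); n < 2 {
        fmt.Printf("conformance precondition not met: need at least 2 schedulable nodes, found %d\n", n)
    }
}

Uncordoning the worker (kubectl uncordon v126-worker) would bring the schedulable count back to 2 and let this spec run.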
------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 [BeforeEach] [sig-apps] ControllerRevision [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:20.082 Apr 11 16:51:20.082: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename controllerrevisions 04/11/24 16:51:20.084 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:20.095 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:20.098 [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93 [It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 STEP: Creating DaemonSet "e2e-r4lvm-daemon-set" 04/11/24 16:51:20.116 STEP: Check that daemon pods launch on every node of the cluster. 04/11/24 16:51:20.121 Apr 11 16:51:20.126: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:20.129: INFO: Number of nodes with available pods controlled by daemonset e2e-r4lvm-daemon-set: 0 Apr 11 16:51:20.129: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 11 16:51:21.135: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:21.139: INFO: Number of nodes with available pods controlled by daemonset e2e-r4lvm-daemon-set: 1 Apr 11 16:51:21.139: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 11 16:51:22.135: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 11 16:51:22.139: INFO: Number of nodes with available pods controlled by daemonset e2e-r4lvm-daemon-set: 2 Apr 11 16:51:22.139: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-r4lvm-daemon-set STEP: Confirm DaemonSet "e2e-r4lvm-daemon-set" successfully created with "daemonset-name=e2e-r4lvm-daemon-set" label 04/11/24 16:51:22.143 STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-r4lvm-daemon-set" 04/11/24 16:51:22.15 Apr 11 16:51:22.153: INFO: Located ControllerRevision: "e2e-r4lvm-daemon-set-6cc868bbb4" STEP: Patching ControllerRevision "e2e-r4lvm-daemon-set-6cc868bbb4" 04/11/24 16:51:22.156 Apr 11 16:51:22.163: INFO: e2e-r4lvm-daemon-set-6cc868bbb4 has been patched STEP: Create a new ControllerRevision 04/11/24 16:51:22.163 Apr 11 16:51:22.169: INFO: Created ControllerRevision: e2e-r4lvm-daemon-set-c56f5c646 STEP: Confirm that there are two ControllerRevisions 04/11/24 16:51:22.169 Apr 11 16:51:22.170: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 11 16:51:22.173: INFO: Found 2 ControllerRevisions STEP: Deleting ControllerRevision "e2e-r4lvm-daemon-set-6cc868bbb4"
04/11/24 16:51:22.173 STEP: Confirm that there is only one ControllerRevision 04/11/24 16:51:22.178 Apr 11 16:51:22.178: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 11 16:51:22.182: INFO: Found 1 ControllerRevisions STEP: Updating ControllerRevision "e2e-r4lvm-daemon-set-c56f5c646" 04/11/24 16:51:22.185 Apr 11 16:51:22.193: INFO: e2e-r4lvm-daemon-set-c56f5c646 has been updated STEP: Generate another ControllerRevision by patching the Daemonset 04/11/24 16:51:22.193 W0411 16:51:22.202163 16 warnings.go:70] unknown field "updateStrategy" STEP: Confirm that there are two ControllerRevisions 04/11/24 16:51:22.202 Apr 11 16:51:22.202: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 11 16:51:23.205: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 11 16:51:23.209: INFO: Found 2 ControllerRevisions STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-r4lvm-daemon-set-c56f5c646=updated" 04/11/24 16:51:23.21 STEP: Confirm that there is only one ControllerRevision 04/11/24 16:51:23.216 Apr 11 16:51:23.217: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 11 16:51:23.220: INFO: Found 1 ControllerRevisions Apr 11 16:51:23.223: INFO: ControllerRevision "e2e-r4lvm-daemon-set-677bc6f57f" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:58 STEP: Deleting DaemonSet "e2e-r4lvm-daemon-set" 04/11/24 16:51:23.226 STEP: deleting DaemonSet.extensions e2e-r4lvm-daemon-set in namespace controllerrevisions-3888, will wait for the garbage collector to delete the pods 04/11/24 16:51:23.226 Apr 11 16:51:23.285: INFO: Deleting DaemonSet.extensions e2e-r4lvm-daemon-set took: 4.766024ms Apr 11 16:51:23.386: INFO: Terminating DaemonSet.extensions e2e-r4lvm-daemon-set pods took: 101.065553ms Apr 11 16:51:24.290: INFO: Number of nodes with available pods controlled by daemonset e2e-r4lvm-daemon-set: 0 Apr 11 16:51:24.290: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-r4lvm-daemon-set Apr 11 16:51:24.293: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7510337"},"items":null} Apr 11 16:51:24.296: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7510337"},"items":null} [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 16:51:24.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "controllerrevisions-3888" for this suite. 
04/11/24 16:51:24.308 ------------------------------ • [4.232 seconds] [sig-apps] ControllerRevision [Serial] test/e2e/apps/framework.go:23 should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124
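The passing spec above drives the ControllerRevision API directly: list revisions by the DaemonSet's label, patch one with a "<name>=updated" marker label, then remove it via DeleteCollection with a label selector. A minimal client-go sketch of the same calls (a standalone, hypothetical illustration rather than the conformance test's own code; the namespace and labels are taken from this run):

// revisions.go -- hypothetical sketch of the list/patch/DeleteCollection
// lifecycle exercised by the spec above.
package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.TODO()
    revisions := cs.AppsV1().ControllerRevisions("controllerrevisions-3888")

    // List every ControllerRevision carrying the DaemonSet's label.
    list, err := revisions.List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=e2e-r4lvm-daemon-set"})
    if err != nil {
        panic(err)
    }
    for _, r := range list.Items {
        fmt.Printf("revision %d: %s\n", r.Revision, r.Name)
    }
    if len(list.Items) == 0 {
        return
    }

    // Patch the first revision with a "<name>=updated" label, as the spec does.
    name := list.Items[0].Name
    patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:"updated"}}}`, name))
    if _, err := revisions.Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // Delete everything matching that label in a single DeleteCollection call.
    if err := revisions.DeleteCollection(ctx, metav1.DeleteOptions{},
        metav1.ListOptions{LabelSelector: name + "=updated"}); err != nil {
        panic(err)
    }
}

ControllerRevisions only store revision history (serialized pod templates); deleting one does not touch running pods, which is why the spec can prune them freely while the DaemonSet keeps running.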
------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 16:51:24.327 Apr 11 16:51:24.327: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 16:51:24.329 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:51:24.34 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:51:24.344 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 16:51:24.348: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 16:51:24.356: INFO: Waiting for terminating namespaces to be deleted... Apr 11 16:51:24.360: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 16:51:24.366: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 16:51:24.366: INFO: Container loopdev ready: true, restart count 0 Apr 11 16:51:24.366: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 16:51:24.366: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 16:51:24.366: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 16:51:24.366: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 16:51:24.366 Apr 11 16:51:24.373: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6679" to be "running" Apr 11 16:51:24.376: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714915ms Apr 11 16:51:26.381: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.00736324s Apr 11 16:51:26.381: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 16:51:26.384 STEP: Trying to apply a random label on the found node. 04/11/24 16:51:26.396 STEP: verifying the node has the label kubernetes.io/e2e-a8a3436b-ec85-4f1a-9159-9ace4a035d23 95 04/11/24 16:51:26.409 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 04/11/24 16:51:26.413 Apr 11 16:51:26.418: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-6679" to be "not pending" Apr 11 16:51:26.422: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115989ms Apr 11 16:51:28.426: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false.
Apr 11 16:51:28.426: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. Elapsed: 2.007521003s
Apr 11 16:51:28.426: INFO: Pod "pod4" satisfied condition "not pending"
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.22.0.3 on the node which pod4 resides and expect not scheduled 04/11/24 16:51:28.426
Apr 11 16:51:28.432: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-6679" to be "not pending"
Apr 11 16:51:28.435: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16548ms
Apr 11 16:51:30.440: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007543335s
Apr 11 16:51:32.440: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007788016s
...
Apr 11 16:56:26.439: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.007218562s
Apr 11 16:56:28.440: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.007710596s
Apr 11 16:56:28.443: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.01075945s
STEP: removing the label kubernetes.io/e2e-a8a3436b-ec85-4f1a-9159-9ace4a035d23 off the node v126-worker2 04/11/24 16:56:28.443
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a8a3436b-ec85-4f1a-9159-9ace4a035d23 04/11/24 16:56:28.459
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 16:56:28.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-6679" for this suite. 04/11/24 16:56:28.468
------------------------------
• [SLOW TEST] [304.147 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704
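The long run of Pending polls above is the test succeeding, not stalling: the helper waits up to 5m0s for pod5 to leave Pending, and the timeout is the expected outcome. The loop is roughly the following (a sketch in plain client-go/apimachinery terms, not the framework's actual helper):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNotPending mirrors the "Waiting up to 5m0s ... to be 'not pending'"
// lines above; for pod5, hitting the timeout is the passing outcome.
func waitNotPending(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return p.Status.Phase != corev1.PodPending, nil
	})
}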
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
test/e2e/storage/empty_dir_wrapper.go:189
[BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:56:28.504
Apr 11 16:56:28.504: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper 04/11/24 16:56:28.506
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:56:28.518
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:56:28.522
[BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31
[It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189
STEP: Creating 50 configmaps 04/11/24 16:56:28.526
STEP: Creating RC which spawns configmap-volume pods 04/11/24 16:56:28.762
Apr 11 16:56:28.866: INFO: Pod name wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac: Found 5 pods out of 5
STEP: Ensuring each pod is running 04/11/24 16:56:28.866
Apr 11 16:56:28.866: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:28.911: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g": Phase="Pending", Reason="", readiness=false. Elapsed: 44.881409ms
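This test stresses the emptyDir wrapper path by giving every pod a large fan-out of ConfigMap volumes and churning the ReplicationController three times in a row. A sketch of the volume fan-out (the count of 50 and the one-volume-per-ConfigMap shape come from the log; the names and mount paths are hypothetical):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// raceVolumes builds 50 ConfigMap volume/mount pairs of the kind each
// wrapped-volume-race pod mounts. Illustrative sketch, not the test's
// exact code; the real ConfigMap names differ.
func raceVolumes() ([]corev1.Volume, []corev1.VolumeMount) {
	var vols []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // hypothetical naming
		vols = append(vols, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}
	return vols, mounts
}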
Apr 11 16:56:30.916: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050309319s
...
Apr 11 16:56:40.917: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g": Phase="Pending", Reason="", readiness=false. Elapsed: 12.05051015s
Apr 11 16:56:42.917: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g": Phase="Running", Reason="", readiness=true. Elapsed: 14.05046138s
Apr 11 16:56:42.917: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-54x2g" satisfied condition "running"
Apr 11 16:56:42.917: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bj8dq" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:42.921: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bj8dq": Phase="Running", Reason="", readiness=true. Elapsed: 4.522897ms
Apr 11 16:56:42.921: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bj8dq" satisfied condition "running"
Apr 11 16:56:42.921: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:42.926: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl": Phase="Running", Reason="", readiness=true. Elapsed: 4.577316ms
Apr 11 16:56:42.926: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl" satisfied condition "running"
Apr 11 16:56:42.926: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:42.930: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg": Phase="Running", Reason="", readiness=true. Elapsed: 4.475074ms
Apr 11 16:56:42.930: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg" satisfied condition "running"
Apr 11 16:56:42.930: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:42.935: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd": Phase="Running", Reason="", readiness=true. Elapsed: 4.222144ms
Apr 11 16:56:42.935: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd" satisfied condition "running"
STEP: deleting ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac in namespace emptydir-wrapper-9735, will wait for the garbage collector to delete the pods 04/11/24 16:56:42.935
Apr 11 16:56:42.995: INFO: Deleting ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac took: 5.848495ms
Apr 11 16:56:43.096: INFO: Terminating ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac pods took: 100.608195ms
STEP: Creating RC which spawns configmap-volume pods 04/11/24 16:56:47.702
Apr 11 16:56:47.717: INFO: Pod name wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb: Found 0 pods out of 5
Apr 11 16:56:52.726: INFO: Pod name wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb: Found 5 pods out of 5
STEP: Ensuring each pod is running 04/11/24 16:56:52.726
Apr 11 16:56:52.727: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:56:52.731: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35216ms
...
Apr 11 16:57:02.737: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010062152s
Apr 11 16:57:04.738: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Running", Reason="", readiness=true. Elapsed: 12.011405471s
Apr 11 16:57:04.738: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld" satisfied condition "running"
Apr 11 16:57:04.738: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:04.743: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb": Phase="Running", Reason="", readiness=true. Elapsed: 4.707259ms
Apr 11 16:57:04.743: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb" satisfied condition "running"
Apr 11 16:57:04.743: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:04.747: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj": Phase="Running", Reason="", readiness=true. Elapsed: 4.394815ms
Apr 11 16:57:04.747: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj" satisfied condition "running"
Apr 11 16:57:04.747: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:04.752: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk": Phase="Running", Reason="", readiness=true. Elapsed: 4.580257ms
Apr 11 16:57:04.752: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk" satisfied condition "running"
Apr 11 16:57:04.752: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:04.756: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.351572ms
Apr 11 16:57:04.756: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd" satisfied condition "running"
STEP: deleting ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb in namespace emptydir-wrapper-9735, will wait for the garbage collector to delete the pods 04/11/24 16:57:04.756
Apr 11 16:57:04.818: INFO: Deleting ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb took: 6.467819ms
Apr 11 16:57:04.919: INFO: Terminating ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb pods took: 100.999628ms
STEP: Creating RC which spawns configmap-volume pods 04/11/24 16:57:07.524
Apr 11 16:57:07.542: INFO: Pod name wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f: Found 0 pods out of 5
Apr 11 16:57:12.551: INFO: Pod name wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f: Found 5 pods out of 5
STEP: Ensuring each pod is running 04/11/24 16:57:12.552
Apr 11 16:57:12.552: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:12.556: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440534ms
...
Apr 11 16:57:22.562: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010386192s
Apr 11 16:57:24.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Running", Reason="", readiness=true. Elapsed: 12.010820841s
Apr 11 16:57:24.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm" satisfied condition "running"
Apr 11 16:57:24.563: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:24.567: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz": Phase="Running", Reason="", readiness=true. Elapsed: 4.647652ms
Apr 11 16:57:24.567: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz" satisfied condition "running"
Apr 11 16:57:24.567: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:24.572: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.317163ms
Apr 11 16:57:24.572: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp" satisfied condition "running"
Apr 11 16:57:24.572: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:24.581: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m": Phase="Running", Reason="", readiness=true. Elapsed: 9.003929ms
Apr 11 16:57:24.581: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m" satisfied condition "running"
Apr 11 16:57:24.581: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-zqqbb" in namespace "emptydir-wrapper-9735" to be "running"
Apr 11 16:57:24.585: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-zqqbb": Phase="Running", Reason="", readiness=true. Elapsed: 4.163264ms
Apr 11 16:57:24.585: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-zqqbb" satisfied condition "running"
STEP: deleting ReplicationController wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f in namespace emptydir-wrapper-9735, will wait for the garbage collector to delete the pods 04/11/24 16:57:24.585
Apr 11 16:57:24.647: INFO: Deleting ReplicationController wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f took: 6.127649ms
Apr 11 16:57:24.747: INFO: Terminating ReplicationController wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f pods took: 100.363347ms
STEP: Cleaning up the configMaps 04/11/24 16:57:27.348
[AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32
Apr 11 16:57:27.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193
STEP: Destroying namespace "emptydir-wrapper-9735" for this suite. 04/11/24 16:57:27.593
------------------------------
• [SLOW TEST] [59.094 seconds]
[sig-storage] EmptyDir wrapper volumes test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189
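Each "will wait for the garbage collector to delete the pods" step above is a delete-then-poll pattern: remove the ReplicationController and wait for the GC to reap its pods. Roughly, in client-go terms (a sketch, not the framework's helper; the label selector is an assumption):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndWait deletes an RC with background propagation, then polls
// until the garbage collector has removed its pods.
func deleteRCAndWait(ctx context.Context, cs kubernetes.Interface, ns, rcName string) error {
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers(ns).Delete(ctx, rcName,
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx,
			metav1.ListOptions{LabelSelector: "name=" + rcName}) // assumed label
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}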
Elapsed: 4.522897ms Apr 11 16:56:42.921: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bj8dq" satisfied condition "running" Apr 11 16:56:42.921: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:56:42.926: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl": Phase="Running", Reason="", readiness=true. Elapsed: 4.577316ms Apr 11 16:56:42.926: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-bk6wl" satisfied condition "running" Apr 11 16:56:42.926: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:56:42.930: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg": Phase="Running", Reason="", readiness=true. Elapsed: 4.475074ms Apr 11 16:56:42.930: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-lr4cg" satisfied condition "running" Apr 11 16:56:42.930: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:56:42.935: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd": Phase="Running", Reason="", readiness=true. Elapsed: 4.222144ms Apr 11 16:56:42.935: INFO: Pod "wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac-z4lmd" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac in namespace emptydir-wrapper-9735, will wait for the garbage collector to delete the pods 04/11/24 16:56:42.935 Apr 11 16:56:42.995: INFO: Deleting ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac took: 5.848495ms Apr 11 16:56:43.096: INFO: Terminating ReplicationController wrapped-volume-race-ce7eabc6-16e2-4780-b260-aae0615083ac pods took: 100.608195ms STEP: Creating RC which spawns configmap-volume pods 04/11/24 16:56:47.702 Apr 11 16:56:47.717: INFO: Pod name wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb: Found 0 pods out of 5 Apr 11 16:56:52.726: INFO: Pod name wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb: Found 5 pods out of 5 STEP: Ensuring each pod is running 04/11/24 16:56:52.726 Apr 11 16:56:52.727: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:56:52.731: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35216ms Apr 11 16:56:54.737: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009987448s Apr 11 16:56:56.737: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010080697s Apr 11 16:56:58.739: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011970086s Apr 11 16:57:00.737: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009964358s Apr 11 16:57:02.737: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.010062152s Apr 11 16:57:04.738: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld": Phase="Running", Reason="", readiness=true. Elapsed: 12.011405471s Apr 11 16:57:04.738: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-2n8ld" satisfied condition "running" Apr 11 16:57:04.738: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:04.743: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb": Phase="Running", Reason="", readiness=true. Elapsed: 4.707259ms Apr 11 16:57:04.743: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8qvzb" satisfied condition "running" Apr 11 16:57:04.743: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:04.747: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj": Phase="Running", Reason="", readiness=true. Elapsed: 4.394815ms Apr 11 16:57:04.747: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-8zvrj" satisfied condition "running" Apr 11 16:57:04.747: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:04.752: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk": Phase="Running", Reason="", readiness=true. Elapsed: 4.580257ms Apr 11 16:57:04.752: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-gkkwk" satisfied condition "running" Apr 11 16:57:04.752: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:04.756: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.351572ms Apr 11 16:57:04.756: INFO: Pod "wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb-kkwfd" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb in namespace emptydir-wrapper-9735, will wait for the garbage collector to delete the pods 04/11/24 16:57:04.756 Apr 11 16:57:04.818: INFO: Deleting ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb took: 6.467819ms Apr 11 16:57:04.919: INFO: Terminating ReplicationController wrapped-volume-race-23dc2d90-1690-4d29-bef1-afb5e271dddb pods took: 100.999628ms STEP: Creating RC which spawns configmap-volume pods 04/11/24 16:57:07.524 Apr 11 16:57:07.542: INFO: Pod name wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f: Found 0 pods out of 5 Apr 11 16:57:12.551: INFO: Pod name wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f: Found 5 pods out of 5 STEP: Ensuring each pod is running 04/11/24 16:57:12.552 Apr 11 16:57:12.552: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:12.556: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440534ms Apr 11 16:57:14.562: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010270013s Apr 11 16:57:16.562: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010225587s Apr 11 16:57:18.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011045974s Apr 11 16:57:20.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010993493s Apr 11 16:57:22.562: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.010386192s Apr 11 16:57:24.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm": Phase="Running", Reason="", readiness=true. Elapsed: 12.010820841s Apr 11 16:57:24.563: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-24tbm" satisfied condition "running" Apr 11 16:57:24.563: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:24.567: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz": Phase="Running", Reason="", readiness=true. Elapsed: 4.647652ms Apr 11 16:57:24.567: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-72znz" satisfied condition "running" Apr 11 16:57:24.567: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:24.572: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.317163ms Apr 11 16:57:24.572: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-7mtxp" satisfied condition "running" Apr 11 16:57:24.572: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:24.581: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m": Phase="Running", Reason="", readiness=true. Elapsed: 9.003929ms Apr 11 16:57:24.581: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-jjk2m" satisfied condition "running" Apr 11 16:57:24.581: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-zqqbb" in namespace "emptydir-wrapper-9735" to be "running" Apr 11 16:57:24.585: INFO: Pod "wrapped-volume-race-89014cd7-9437-4d2a-815f-49e02d74ab8f-zqqbb": Phase="Running", Reason="", readiness=true. 
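The spec above stresses the legacy emptyDir wrapper by repeatedly spawning replication controllers whose pods each mount many ConfigMap volumes. As a rough client-go sketch of the volume layout being exercised — pod name, image, ConfigMap names, and counts are illustrative placeholders, not the suite's values:

    // emptydir_wrapper_sketch.go -- hypothetical illustration, not suite code.
    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client the same way the suite does (kubeconfig path from the log).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // One pod mounting several ConfigMap volumes; the test does this with
        // 50 ConfigMaps across 5 replicas to provoke the old wrapper-volume race.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "cm-volume-pod"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "c", Image: "registry.k8s.io/pause:3.9"}},
            },
        }
        for i := 0; i < 5; i++ {
            name := fmt.Sprintf("cm-%d", i) // assumes ConfigMaps cm-0..cm-4 already exist
            pod.Spec.Volumes = append(pod.Spec.Volumes, v1.Volume{
                Name: name,
                VolumeSource: v1.VolumeSource{
                    ConfigMap: &v1.ConfigMapVolumeSource{
                        LocalObjectReference: v1.LocalObjectReference{Name: name},
                    },
                },
            })
            pod.Spec.Containers[0].VolumeMounts = append(pod.Spec.Containers[0].VolumeMounts,
                v1.VolumeMount{Name: name, MountPath: "/etc/" + name})
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------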
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
test/e2e/apimachinery/namespace.go:268
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:57:27.616
Apr 11 16:57:27.616: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/11/24 16:57:27.618
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:57:27.63
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:57:27.634
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:268
STEP: creating a Namespace 04/11/24 16:57:27.639
STEP: patching the Namespace 04/11/24 16:57:27.65
STEP: get the Namespace and ensuring it has the label 04/11/24 16:57:27.654
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:57:27.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-1085" for this suite. 04/11/24 16:57:27.663
STEP: Destroying namespace "nspatchtest-790bb652-1d37-4042-a410-c4f48211f231-4608" for this suite. 04/11/24 16:57:27.668
------------------------------
• [0.057 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:268
------------------------------
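The patch step above is a single strategic-merge patch against the Namespace object, followed by a read-back. A minimal client-go sketch of the same patch-then-verify flow — the namespace name and label are placeholders, not the suite's generated names:

    // namespace_patch_sketch.go -- hypothetical illustration, not suite code.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // Strategic-merge patch that adds one label to an existing namespace.
        patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
        if _, err := cs.CoreV1().Namespaces().Patch(ctx, "my-namespace",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }

        // Read the namespace back and confirm the label landed.
        ns, err := cs.CoreV1().Namespaces().Get(ctx, "my-namespace", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if ns.Labels["testLabel"] != "testValue" {
            panic("label not applied")
        }
    }
------------------------------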
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
test/e2e/apimachinery/namespace.go:243
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:57:27.686
Apr 11 16:57:27.686: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/11/24 16:57:27.688
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:57:27.699
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:57:27.703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:243
STEP: Creating a test namespace 04/11/24 16:57:27.707
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:57:27.718
STEP: Creating a pod in the namespace 04/11/24 16:57:27.722
STEP: Waiting for the pod to have running status 04/11/24 16:57:27.729
Apr 11 16:57:27.729: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-4991" to be "running"
Apr 11 16:57:27.732: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.850066ms
Apr 11 16:57:29.736: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007073322s
Apr 11 16:57:31.737: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.007606338s
Apr 11 16:57:31.737: INFO: Pod "test-pod" satisfied condition "running"
STEP: Deleting the namespace 04/11/24 16:57:31.737
STEP: Waiting for the namespace to be removed. 04/11/24 16:57:31.742
STEP: Recreating the namespace 04/11/24 16:57:42.746
STEP: Verifying there are no pods in the namespace 04/11/24 16:57:42.758
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:57:42.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-3285" for this suite. 04/11/24 16:57:42.767
STEP: Destroying namespace "nsdeletetest-4991" for this suite. 04/11/24 16:57:42.772
Apr 11 16:57:42.775: INFO: Namespace nsdeletetest-4991 was already deleted
STEP: Destroying namespace "nsdeletetest-7556" for this suite. 04/11/24 16:57:42.775
------------------------------
• [SLOW TEST] [15.095 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/apimachinery/namespace.go:243
------------------------------
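The spec above verifies that deleting a namespace tears down its pods before the name can be reused; the log shows the removal taking roughly 11 seconds. A sketch of that delete-and-poll pattern in client-go — nsName is a placeholder, and the loop is a simplified stand-in for the framework's wait helpers:

    // namespace_delete_sketch.go -- hypothetical illustration, not suite code.
    package main

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // Delete the namespace, then poll until the API server reports NotFound;
        // finalizers keep the namespace in Terminating while its pods are removed.
        nsName := "nsdeletetest-example" // placeholder
        if err := cs.CoreV1().Namespaces().Delete(ctx, nsName, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        for {
            if _, err := cs.CoreV1().Namespaces().Get(ctx, nsName, metav1.GetOptions{}); apierrors.IsNotFound(err) {
                break
            }
            time.Sleep(time.Second)
        }

        // After recreation, the namespace starts empty: no pods survive deletion.
        pods, err := cs.CoreV1().Pods(nsName).List(ctx, metav1.ListOptions{})
        if err == nil && len(pods.Items) != 0 {
            panic("pods survived namespace deletion")
        }
    }
------------------------------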
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
test/e2e/scheduling/preemption.go:130
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:57:42.835
Apr 11 16:57:42.835: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 16:57:42.837
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:57:42.848
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:57:42.852
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:97
Apr 11 16:57:42.870: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 16:58:42.896: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:130
STEP: Create pods that use 4/5 of node resources. 04/11/24 16:58:42.9
Apr 11 16:58:42.927: INFO: Created pod: pod0-0-sched-preemption-low-priority
Apr 11 16:58:42.933: INFO: Created pod: pod0-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 04/11/24 16:58:42.933
Apr 11 16:58:42.933: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-9246" to be "running"
Apr 11 16:58:42.936: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 3.157177ms
Apr 11 16:58:44.940: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.007270526s
Apr 11 16:58:44.940: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Apr 11 16:58:44.940: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-9246" to be "running"
Apr 11 16:58:44.943: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.226076ms
Apr 11 16:58:44.943: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a high priority pod that has same requirements as that of lower priority pod 04/11/24 16:58:44.943
Apr 11 16:58:44.950: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-9246" to be "running"
Apr 11 16:58:44.953: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259484ms
Apr 11 16:58:46.958: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008245996s
Apr 11 16:58:48.959: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008805379s
Apr 11 16:58:50.958: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008669655s
Apr 11 16:58:50.959: INFO: Pod "preemptor-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:58:50.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-9246" for this suite. 04/11/24 16:58:51.006
------------------------------
• [SLOW TEST] [68.176 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:130
------------------------------
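Preemption here hinges on PriorityClass objects plus pods whose requests fill most of a node, so the higher-priority preemptor can only run if a lower-priority victim is evicted. A hedged client-go sketch of the two moving parts — the class name, value, image, and request size are arbitrary stand-ins, not the suite's settings:

    // preemption_sketch.go -- hypothetical illustration, not suite code.
    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // A high PriorityClass; the suite defines low/medium/high classes in the
        // same general way (the value here is arbitrary).
        pc := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "sketch-high-priority"},
            Value:      1000,
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // A preemptor pod: similar resource request to the victims but higher
        // priority, so the scheduler evicts a lower-priority pod to place it.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor-sketch"},
            Spec: v1.PodSpec{
                PriorityClassName: "sketch-high-priority",
                Containers: []v1.Container{{
                    Name:  "c",
                    Image: "registry.k8s.io/pause:3.9",
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{v1.ResourceMemory: resource.MustParse("100Mi")},
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------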
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
test/e2e/apps/daemon_set.go:177
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:58:51.024
Apr 11 16:58:51.024: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/11/24 16:58:51.025
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:58:51.036
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:58:51.04
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:157
[It] should run and stop simple daemon [Conformance]
  test/e2e/apps/daemon_set.go:177
STEP: Creating simple DaemonSet "daemon-set" 04/11/24 16:58:51.056
STEP: Check that daemon pods launch on every node of the cluster. 04/11/24 16:58:51.062
Apr 11 16:58:51.067: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:51.070: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:58:51.070: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:58:52.076: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:52.080: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:58:52.080: INFO: Node v126-worker2 is running 0 daemon pod, expected 1
Apr 11 16:58:53.075: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:53.079: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 11 16:58:53.079: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Stop a daemon pod, check that the daemon pod is revived. 04/11/24 16:58:53.083
Apr 11 16:58:53.097: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:53.100: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:58:53.100: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:58:54.105: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:54.108: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:58:54.108: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:58:55.106: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:55.110: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:58:55.110: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:58:56.106: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:56.110: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 11 16:58:56.110: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:122
STEP: Deleting DaemonSet "daemon-set" 04/11/24 16:58:56.114
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1271, will wait for the garbage collector to delete the pods 04/11/24 16:58:56.114
Apr 11 16:58:56.173: INFO: Deleting DaemonSet.extensions daemon-set took: 4.796816ms
Apr 11 16:58:56.273: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.553368ms
Apr 11 16:58:59.177: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:58:59.177: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Apr 11 16:58:59.180: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7512061"},"items":null}
Apr 11 16:58:59.183: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7512061"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:58:59.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-1271" for this suite. 04/11/24 16:58:59.197
------------------------------
• [SLOW TEST] [8.178 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  test/e2e/apps/daemon_set.go:177
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
test/e2e/apps/daemon_set.go:305
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 16:58:59.222
Apr 11 16:58:59.222: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/11/24 16:58:59.224
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 16:58:59.235
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 16:58:59.239
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:157
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/apps/daemon_set.go:305
STEP: Creating a simple DaemonSet "daemon-set" 04/11/24 16:58:59.255
STEP: Check that daemon pods launch on every node of the cluster. 04/11/24 16:58:59.26
Apr 11 16:58:59.265: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:58:59.268: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:58:59.268: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:59:00.274: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:59:00.277: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:59:00.277: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:59:01.274: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:59:01.278: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 11 16:59:01.278: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 04/11/24 16:59:01.281
Apr 11 16:59:01.298: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:59:01.302: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:59:01.302: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:59:02.309: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:59:02.313: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 11 16:59:02.313: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 11 16:59:03.308: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 11 16:59:03.312: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 11 16:59:03.312: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted. 04/11/24 16:59:03.312
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:122
STEP: Deleting DaemonSet "daemon-set" 04/11/24 16:59:03.318
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3036, will wait for the garbage collector to delete the pods 04/11/24 16:59:03.319
Apr 11 16:59:03.378: INFO: Deleting DaemonSet.extensions daemon-set took: 5.151346ms
Apr 11 16:59:03.478: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.638523ms
Apr 11 16:59:06.182: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 11 16:59:06.182: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Apr 11 16:59:06.185: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"7512152"},"items":null}
Apr 11 16:59:06.188: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"7512152"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 11 16:59:06.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-3036" for this suite. 04/11/24 16:59:06.202
------------------------------
• [SLOW TEST] [6.985 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  test/e2e/apps/daemon_set.go:305
------------------------------
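The "Set a daemon pod's phase to 'Failed'" step flips one daemon pod's status so the DaemonSet controller has to delete it and create a replacement. A minimal sketch of forcing that phase through the status subresource — namespace and pod name are placeholders, and this stands in for whatever helper the suite uses:

    // daemon_pod_fail_sketch.go -- hypothetical illustration, not suite code.
    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // Fetch one daemon pod and mark it Failed via the status subresource;
        // the DaemonSet controller should then replace it, which is exactly
        // what the spec above waits for.
        pod, err := cs.CoreV1().Pods("default").Get(ctx, "daemon-sketch-abcde", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        pod.Status.Phase = v1.PodFailed
        if _, err := cs.CoreV1().Pods("default").UpdateStatus(ctx, pod, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------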
[SynchronizedAfterSuite]
test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
Apr 11 16:59:06.284: INFO: Running AfterSuite actions on node 1
Apr 11 16:59:06.284: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:153
------------------------------
[ReportAfterSuite] PASSED [0.254 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529
------------------------------
Summarizing 1 Failure:
  [FAIL] [sig-apps] Daemon set [Serial] [It] should rollback without unnecessary restarts [Conformance]
  test/e2e/apps/daemon_set.go:446
Ran 23 of 7069 Specs in 719.516 seconds
FAIL! -- 22 Passed | 1 Failed | 0 Pending | 7046 Skipped
--- FAIL: TestE2E (719.99s)
FAIL
Ginkgo ran 1 suite in 12m0.132892441s
Test Suite Failed