I0418 17:56:51.871241 16 e2e.go:126] Starting e2e run "381651fe-6506-426c-b50b-140b37f12677" on Ginkgo node 1
Apr 18 17:56:51.887: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1713463011 - will randomize all specs

Will run 15 of 7069 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
[SynchronizedBeforeSuite] TOP-LEVEL
test/e2e/e2e.go:77
Apr 18 17:56:52.059: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 17:56:52.062: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 18 17:56:52.091: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 18 17:56:52.125: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 18 17:56:52.125: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 18 17:56:52.125: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 18 17:56:52.132: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Apr 18 17:56:52.132: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 18 17:56:52.132: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 18 17:56:52.132: INFO: e2e test version: v1.26.13
Apr 18 17:56:52.133: INFO: kube-apiserver version: v1.26.6
[SynchronizedBeforeSuite] TOP-LEVEL
test/e2e/e2e.go:77
Apr 18 17:56:52.134: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 17:56:52.139: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.081 seconds]
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
test/e2e/scheduling/predicates.go:127
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 17:56:52.199
Apr 18 17:56:52.199: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 17:56:52.2
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:56:52.215
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:56:52.219
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:97
Apr 18 17:56:52.224: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 17:56:52.232: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 17:56:52.236: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 17:56:52.242: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.242: INFO: Container loopdev ready: true, restart count 0
Apr 18 17:56:52.242: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.242: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 17:56:52.242: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.242: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 17:56:52.242: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 17:56:52.248: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.248: INFO: Container loopdev ready: true, restart count 0
Apr 18 17:56:52.248: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.248: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 17:56:52.248: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:56:52.248: INFO: Container kube-proxy ready: true, restart count 0
[It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
test/e2e/scheduling/predicates.go:127
Apr 18 17:56:52.266: INFO: Pod create-loop-devs-w9ldx requesting local ephemeral resource =0 on Node v126-worker
Apr 18 17:56:52.266: INFO: Pod create-loop-devs-xnxkn requesting local ephemeral resource =0 on Node v126-worker2
Apr 18 17:56:52.266: INFO: Pod kindnet-68nxx requesting local ephemeral resource =0 on Node v126-worker
Apr 18 17:56:52.266: INFO: Pod kindnet-wqc6h requesting local ephemeral resource =0 on Node v126-worker2
Apr 18 17:56:52.266: INFO: Pod kube-proxy-4wtz6 requesting local ephemeral resource =0 on Node v126-worker
Apr 18 17:56:52.266: INFO: Pod kube-proxy-hjqqd requesting local ephemeral resource =0 on Node v126-worker2
Apr 18 17:56:52.266: INFO: Using pod capacity: 47055905587
Apr 18 17:56:52.266: INFO: Node: v126-worker has local ephemeral resource allocatable: 470559055872
Apr 18 17:56:52.266: INFO: Node: v126-worker2 has local ephemeral resource allocatable: 470559055872
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 04/18/24 17:56:52.266
Apr 18 17:56:52.370: INFO: Waiting for running...
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c7718806bacd74], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-0 to v126-worker2] 04/18/24 17:57:02.43
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c771883d5df965], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.43
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c771883e4941cc], Reason = [Created], Message = [Created container overcommit-0] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c771884e1bfd85], Reason = [Started], Message = [Started container overcommit-0] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c77188070ce4c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-1 to v126-worker2] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c77188a5c0cc83], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c77188a6c38dfd], Reason = [Created], Message = [Created container overcommit-1] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c77188b2d2a012], Reason = [Started], Message = [Started container overcommit-1] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-10.17c7718809d1c5df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-10 to v126-worker] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-10.17c77188863b5fcd], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-10.17c7718886fbb0b9], Reason = [Created], Message = [Created container overcommit-10] 04/18/24 17:57:02.431
STEP: Considering event: Type = [Normal], Name = [overcommit-10.17c771889248736b], Reason = [Started], Message = [Started container overcommit-10] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-11.17c771880a304916], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-11 to v126-worker2] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-11.17c771885ffa43a2], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-11.17c7718860a23996], Reason = [Created], Message = [Created container overcommit-11] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-11.17c771886e30756a], Reason = [Started], Message = [Started container overcommit-11] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-12.17c771880a76f735], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-12 to v126-worker2] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-12.17c77188505eff56], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-12.17c771885130d2aa], Reason = [Created], Message = [Created container overcommit-12] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-12.17c771885e49c3a7], Reason = [Started], Message = [Started container overcommit-12] 04/18/24 17:57:02.432
STEP: Considering event: Type = [Normal], Name = [overcommit-13.17c771880ab425dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-13 to v126-worker] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-13.17c771886518969f], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-13.17c7718865dd8b97], Reason = [Created], Message = [Created container overcommit-13] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-13.17c7718872228421], Reason = [Started], Message = [Started container overcommit-13] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-14.17c771880af841a0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-14 to v126-worker] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-14.17c771885069aa46], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-14.17c771885173e5fc], Reason = [Created], Message = [Created container overcommit-14] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-14.17c771885f7be268], Reason = [Started], Message = [Started container overcommit-14] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-15.17c771880b4c3958], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-15 to v126-worker] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-15.17c7718872d93240], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.433
STEP: Considering event: Type = [Normal], Name = [overcommit-15.17c77188737b3fc1], Reason = [Created], Message = [Created container overcommit-15] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-15.17c7718880d55f75], Reason = [Started], Message = [Started container overcommit-15] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-16.17c771880b998b67], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-16 to v126-worker] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-16.17c771889639d790], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-16.17c7718896e9a2fa], Reason = [Created], Message = [Created container overcommit-16] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-16.17c77188a230b560], Reason = [Started], Message = [Started container overcommit-16] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-17.17c771880bda403e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-17 to v126-worker2] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-17.17c77188949f96b4], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-17.17c7718895714f40], Reason = [Created], Message = [Created container overcommit-17] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-17.17c771889e53998a], Reason = [Started], Message = [Started container overcommit-17] 04/18/24 17:57:02.434
STEP: Considering event: Type = [Normal], Name = [overcommit-18.17c771880c18e463], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-18 to v126-worker2] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-18.17c77188519590ec], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-18.17c77188524441af], Reason = [Created], Message = [Created container overcommit-18] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-18.17c771885fec1989], Reason = [Started], Message = [Started container overcommit-18] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-19.17c771880c64c2ba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-19 to v126-worker2] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-19.17c7718885a7a7ff], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-19.17c7718886665d04], Reason = [Created], Message = [Created container overcommit-19] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-19.17c771889244a196], Reason = [Started], Message = [Started container overcommit-19] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c7718807546632], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-2 to v126-worker] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c771883d6fb995], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.435
STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c771883e60c5dd], Reason = [Created], Message = [Created container overcommit-2] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c771884e18d2e3], Reason = [Started], Message = [Started container overcommit-2] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c7718807973fee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-3 to v126-worker2] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c77188861c3612], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c7718886e0f78f], Reason = [Created], Message = [Created container overcommit-3] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c7718892c820a3], Reason = [Started], Message = [Started container overcommit-3] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c7718807e2e07a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-4 to v126-worker2] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c77188931ba3f5], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c7718893b7cce5], Reason = [Created], Message = [Created container overcommit-4] 04/18/24 17:57:02.436
STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c771889e5375ba], Reason = [Started], Message = [Started container overcommit-4] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c77188083a568c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-5 to v126-worker] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c7718892d61034], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c77188937fb64c], Reason = [Created], Message = [Created container overcommit-5] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c771889daa86dd], Reason = [Started], Message = [Started container overcommit-5] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c7718808898ba6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-6 to v126-worker] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c7718850697bcc], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c77188515d412d], Reason = [Created], Message = [Created container overcommit-6] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c771885d5da466], Reason = [Started], Message = [Started container overcommit-6] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c7718808ce49a0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-7 to v126-worker] 04/18/24 17:57:02.437
STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c77188a5e915b4], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c77188a6a30f6e], Reason = [Created], Message = [Created container overcommit-7] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c77188b287c160], Reason = [Started], Message = [Started container overcommit-7] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c771880923569c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-8 to v126-worker2] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c7718871da9570], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c7718872a2971a], Reason = [Created], Message = [Created container overcommit-8] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c7718882036e46], Reason = [Started], Message = [Started container overcommit-8] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c77188098fb9c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6873/overcommit-9 to v126-worker] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c77188737bd132], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c77188742ba87e], Reason = [Created], Message = [Created container overcommit-9] 04/18/24 17:57:02.438
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c7718881ee42d5], Reason = [Started], Message = [Started container overcommit-9] 04/18/24 17:57:02.439
STEP: Considering event: Type = [Warning], Name = [additional-pod.17c7718a643f71d0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient ephemeral-storage. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..] 04/18/24 17:57:02.443
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/node/init/init.go:32
Apr 18 17:57:03.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-6873" for this suite. 04/18/24 17:57:03.457
------------------------------
• [SLOW TEST] [11.263 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
test/e2e/scheduling/predicates.go:127
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
test/e2e/scheduling/ubernetes_lite.go:81
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 17:57:03.477
Apr 18 17:57:03.477: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename multi-az 04/18/24 17:57:03.479
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:57:03.491
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:57:03.496
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:51
STEP: Checking for multi-zone cluster. Schedulable zone count = 0 04/18/24 17:57:03.504
Apr 18 17:57:03.505: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/node/init/init.go:32
Apr 18 17:57:03.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:72
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters
tear down framework | framework.go:193
STEP: Destroying namespace "multi-az-7219" for this suite. 04/18/24 17:57:03.509
------------------------------
S [SKIPPED] [0.037 seconds]
[sig-scheduling] Multi-AZ Clusters [BeforeEach]
test/e2e/scheduling/ubernetes_lite.go:51
should spread the pods of a replication controller across zones [Serial]
test/e2e/scheduling/ubernetes_lite.go:81
Schedulable zone count is 0, only run for multi-zone clusters, skipping test
In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
test/e2e/scheduling/predicates.go:539
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 17:57:03.527
Apr 18 17:57:03.527: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 17:57:03.529
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:57:03.539
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:57:03.543
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:97
Apr 18 17:57:03.548: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 17:57:03.557: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 17:57:03.560: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 17:57:03.569: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container loopdev ready: true, restart count 0
Apr 18 17:57:03.569: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 17:57:03.569: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-10 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-10 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-13 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-13 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-14 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-14 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-15 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-15 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-16 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-16 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-2 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-2 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-5 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-5 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-6 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-6 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-7 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-7 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: overcommit-9 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.569: INFO: Container overcommit-9 ready: true, restart count 0
Apr 18 17:57:03.569: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 17:57:03.578: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container loopdev ready: true, restart count 0
Apr 18 17:57:03.578: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 17:57:03.578: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-0 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-0 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-1 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-1 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-11 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-11 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-12 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-12 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-17 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-17 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-18 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-18 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-19 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-19 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-3 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-3 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-4 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-4 ready: true, restart count 0
Apr 18 17:57:03.578: INFO: overcommit-8 from sched-pred-6873 started at 2024-04-18 17:56:52 +0000 UTC (1 container statuses recorded)
Apr 18 17:57:03.578: INFO: Container overcommit-8 ready: true, restart count 0
[It] validates that required NodeAffinity setting is respected if matching
test/e2e/scheduling/predicates.go:539
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 17:57:03.578
Apr 18 17:57:03.585: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3925" to be "running"
Apr 18 17:57:03.589: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.072545ms
Apr 18 17:57:05.593: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007105725s
Apr 18 17:57:05.593: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 17:57:05.596
STEP: Trying to apply a random label on the found node. 04/18/24 17:57:05.604
STEP: verifying the node has the label kubernetes.io/e2e-5a8b6ff5-5695-4dd1-910f-db8e0c4a1158 42 04/18/24 17:57:05.618
STEP: Trying to relaunch the pod, now with labels. 04/18/24 17:57:05.621
Apr 18 17:57:05.626: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3925" to be "not pending"
Apr 18 17:57:05.630: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090973ms
Apr 18 17:57:07.634: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.0073175s
Apr 18 17:57:07.634: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-5a8b6ff5-5695-4dd1-910f-db8e0c4a1158 off the node v126-worker2 04/18/24 17:57:07.637
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5a8b6ff5-5695-4dd1-910f-db8e0c4a1158 04/18/24 17:57:07.651
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/node/init/init.go:32
Apr 18 17:57:07.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-3925" for this suite. 04/18/24 17:57:07.66
------------------------------
• [4.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
validates that required NodeAffinity setting is respected if matching
test/e2e/scheduling/predicates.go:539
04/18/24 17:57:07.66 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:57:07.683 Apr 18 17:57:07.683: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/18/24 17:57:07.684 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:57:07.694 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:57:07.697 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 18 17:57:07.701: INFO: Waiting up to 1m0s for all nodes to be ready Apr 18 17:58:07.729: INFO: Waiting for terminating namespaces to be deleted... Apr 18 17:58:07.732: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 18 17:58:07.746: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 18 17:58:07.746: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 18 17:58:07.753: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:07.753: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:07.753: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:07.753: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:07.753: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:07.753: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:07.753: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:07.753: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 04/18/24 17:58:07.753 STEP: Trying to launch a pod without a label to get a node which can launch it. 
04/18/24 17:58:07.753 Apr 18 17:58:07.763: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:07.766: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941578ms Apr 18 17:58:09.770: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006745894s Apr 18 17:58:09.770: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 17:58:09.773 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 17:58:09.783 Apr 18 17:58:09.788: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:09.791: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370395ms Apr 18 17:58:11.795: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007358868s Apr 18 17:58:11.795: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 17:58:11.799 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 04/18/24 17:58:11.808 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Apr 18 17:58:11.841: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:11.841: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:11.841: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:11.841: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:11.841: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:11.841: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:11.841: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:11.841: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:11.847: INFO: Waiting for running... Apr 18 17:58:11.847: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
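The ComputeCPUMemFraction entries are plain requests-over-allocatable arithmetic, and the log's own numbers check out: 200m requested of 88000m allocatable CPU gives 0.0022727..., and pushing a node to a 0.5 CPU fraction takes a balancing pod of 0.5 * 88000 - 200 = 43800 millicores, exactly the "Cpu: 43800" entries that appear once the balanced pods exist. A worked sketch of that arithmetic (constants copied from the log; the function name is illustrative):

package example

import "fmt"

// FractionsFromLog reproduces the CPU fractions logged for v126-worker
// and v126-worker2 before and after the framework creates its
// "balanced" filler pods.
func FractionsFromLog() {
	const (
		requestedMilliCPU   = 200.0   // totalRequestedCPUResource from the log
		allocatableMilliCPU = 88000.0 // cpuAllocatableMil from the log
	)
	// Before balancing: 0.0022727272727272726, as logged.
	fmt.Println(requestedMilliCPU / allocatableMilliCPU)

	// To land on a 0.5 cpuFraction, the filler pod must request
	// 0.5*88000 - 200 = 43800 millicores, matching "Cpu: 43800" above.
	balanced := 0.5*allocatableMilliCPU - requestedMilliCPU
	fmt.Println(balanced) // 43800
}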
04/18/24 17:58:16.906 Apr 18 17:58:16.907: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:16.907: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:16.907: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: 15463f74-d5a0-4faf-a449-cf528e5d3f06-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:58:16.907: INFO: Node: v126-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:58:16.907: INFO: Node: v126-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 17:58:16.907 Apr 18 17:58:16.907: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:16.907: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:16.907: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: c8d55188-3fcf-4b52-9e17-c80c74a5e3ae-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:58:16.907: INFO: Node: v126-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:58:16.907: INFO: Node: v126-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Run a ReplicaSet with 4 replicas on node "v126-worker2" 04/18/24 17:58:16.907 Apr 18 17:58:20.928: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:20.932: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.222754ms Apr 18 17:58:22.936: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007534264s Apr 18 17:58:22.936: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v126-worker" 04/18/24 17:58:22.939 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v126-worker2 04/18/24 17:58:24.961 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 04/18/24 17:58:24.976 STEP: removing the label kubernetes.io/e2e-pts-score off the node v126-worker 04/18/24 17:58:24.98 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 04/18/24 17:58:24.993 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 17:58:24.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-9258" for this suite. 
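To summarize the scoring setup just completed: both nodes are first balanced to roughly 50% utilization so resource scoring cancels out, a dedicated topology key (kubernetes.io/e2e-pts-score) is applied to exactly two nodes, four matching replicas are pinned to v126-worker2, and the spread-scored test-pod is then expected on v126-worker to even out the distribution. A minimal sketch of a pod using such a soft spread constraint; the helper is illustrative (the suite assembles its specs programmatically):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadScoredPod sketches a pod whose placement is scored, not gated,
// by how evenly pods matching selector spread across the dedicated
// kubernetes.io/e2e-pts-score topology domains set up above.
func spreadScoredPod(selector map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Labels: selector},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score",
				WhenUnsatisfiable: corev1.ScheduleAnyway, // scoring, not filtering
				LabelSelector:     &metav1.LabelSelector{MatchLabels: selector},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9", // illustrative image
			}},
		},
	}
}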
04/18/24 17:58:25 ------------------------------ • [SLOW TEST] [77.322 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring test/e2e/scheduling/priorities.go:267 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:57:07.683 Apr 18 17:57:07.683: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/18/24 17:57:07.684 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:57:07.694 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:57:07.697 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 18 17:57:07.701: INFO: Waiting up to 1m0s for all nodes to be ready Apr 18 17:58:07.729: INFO: Waiting for terminating namespaces to be deleted... Apr 18 17:58:07.732: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 18 17:58:07.746: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 18 17:58:07.746: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Apr 18 17:58:07.753: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:07.753: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:07.753: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:07.753: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:07.753: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:07.753: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:07.753: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:07.753: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:07.753: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 04/18/24 17:58:07.753 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 17:58:07.753 Apr 18 17:58:07.763: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:07.766: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941578ms Apr 18 17:58:09.770: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006745894s Apr 18 17:58:09.770: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 17:58:09.773 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 17:58:09.783 Apr 18 17:58:09.788: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:09.791: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.370395ms Apr 18 17:58:11.795: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007358868s Apr 18 17:58:11.795: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 17:58:11.799 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 04/18/24 17:58:11.808 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Apr 18 17:58:11.841: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:11.841: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:11.841: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:11.841: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:11.841: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:11.841: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:11.841: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:11.841: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:58:11.841: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:58:11.847: INFO: Waiting for running... Apr 18 17:58:11.847: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 17:58:16.906 Apr 18 17:58:16.907: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:58:16.907: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:58:16.907: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: 15463f74-d5a0-4faf-a449-cf528e5d3f06-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:58:16.907: INFO: Node: v126-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:58:16.907: INFO: Node: v126-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
04/18/24 17:58:16.907 Apr 18 17:58:16.907: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:58:16.907: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:58:16.907: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:58:16.907: INFO: Pod for on the node: c8d55188-3fcf-4b52-9e17-c80c74a5e3ae-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:58:16.907: INFO: Node: v126-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:58:16.907: INFO: Node: v126-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Run a ReplicaSet with 4 replicas on node "v126-worker2" 04/18/24 17:58:16.907 Apr 18 17:58:20.928: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-9258" to be "running" Apr 18 17:58:20.932: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.222754ms Apr 18 17:58:22.936: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007534264s Apr 18 17:58:22.936: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v126-worker" 04/18/24 17:58:22.939 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v126-worker2 04/18/24 17:58:24.961 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 04/18/24 17:58:24.976 STEP: removing the label kubernetes.io/e2e-pts-score off the node v126-worker 04/18/24 17:58:24.98 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 04/18/24 17:58:24.993 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 17:58:24.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-9258" for this suite. 04/18/24 17:58:25 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 [BeforeEach] [sig-scheduling] Multi-AZ Clusters set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:58:25.007 Apr 18 17:58:25.007: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 04/18/24 17:58:25.009 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:58:25.019 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:58:25.023 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. 
Schedulable zone count = 0 04/18/24 17:58:25.03 Apr 18 17:58:25.030: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/node/init/init.go:32 Apr 18 17:58:25.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters tear down framework | framework.go:193 STEP: Destroying namespace "multi-az-2975" for this suite. 04/18/24 17:58:25.034 ------------------------------ S [SKIPPED] [0.030 seconds] [sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51 should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] Multi-AZ Clusters set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:58:25.007 Apr 18 17:58:25.007: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 04/18/24 17:58:25.009 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:58:25.019 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:58:25.023 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. Schedulable zone count = 0 04/18/24 17:58:25.03 Apr 18 17:58:25.030: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/node/init/init.go:32 Apr 18 17:58:25.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters tear down framework | framework.go:193 STEP: Destroying namespace "multi-az-2975" for this suite. 
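The skip above is mechanical: the suite counts distinct zone labels across schedulable nodes and bails out when it finds fewer than a multi-zone spread requires (here zero, since these nodes carry no zone label at all). A sketch of that check, under the assumption that the well-known topology.kubernetes.io/zone label is what is being counted:

package example

import corev1 "k8s.io/api/core/v1"

// schedulableZoneCount counts distinct zone labels across schedulable
// nodes; on nodes with no zone label it returns 0, which is why the
// Multi-AZ spec above skips.
func schedulableZoneCount(nodes []corev1.Node) int {
	zones := map[string]struct{}{}
	for _, n := range nodes {
		if n.Spec.Unschedulable {
			continue
		}
		// topology.kubernetes.io/zone is the current well-known label;
		// older clusters used failure-domain.beta.kubernetes.io/zone.
		if z, ok := n.Labels["topology.kubernetes.io/zone"]; ok && z != "" {
			zones[z] = struct{}{}
		}
	}
	return len(zones)
}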
04/18/24 17:58:25.034 << End Captured GinkgoWriter Output Schedulable zone count is 0, only run for multi-zone clusters, skipping test In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:58:25.045 Apr 18 17:58:25.045: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/18/24 17:58:25.046 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:58:25.055 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:58:25.058 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 18 17:58:25.062: INFO: Waiting up to 1m0s for all nodes to be ready Apr 18 17:59:25.088: INFO: Waiting for terminating namespaces to be deleted... Apr 18 17:59:25.092: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 18 17:59:25.106: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 18 17:59:25.106: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 18 17:59:25.113: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:25.113: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.113: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.113: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.113: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:25.113: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.113: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.113: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Apr 18 17:59:25.121: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:25.121: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.121: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.121: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.121: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:25.121: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.121: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.121: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.132: INFO: Waiting for running... Apr 18 17:59:25.133: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
04/18/24 17:59:30.193 Apr 18 17:59:30.193: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:30.193: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:30.193: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: 4c5fbca8-2fa0-48ce-af01-7e37d608b74f-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:59:30.193: INFO: Node: v126-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:59:30.193: INFO: Node: v126-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 17:59:30.193 Apr 18 17:59:30.193: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:30.193: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:30.193: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: 112705e3-5ef1-4189-b002-90410ba5fe46-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:59:30.193: INFO: Node: v126-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:59:30.193: INFO: Node: v126-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Trying to apply 10 (tolerable) taints on the first node. 04/18/24 17:59:30.193 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e7cc4073-964e-47a9-96c9=testing-taint-value-6fc8d0b3-efe1-4450-a9b3-462f54f48443:PreferNoSchedule 04/18/24 17:59:30.21 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dbd5e7ba-c458-498c-b5c7=testing-taint-value-74efdc91-ec90-4276-a84f-d429ee789afe:PreferNoSchedule 04/18/24 17:59:30.229 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4b6d3b93-2c27-4cb5-ba32=testing-taint-value-4f882119-f376-4f54-8ed4-019a6e51810b:PreferNoSchedule 04/18/24 17:59:30.249 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0c048559-8b8e-49e0-bc78=testing-taint-value-90b6e6ea-eaa9-4d39-b954-b59c7d6cfc95:PreferNoSchedule 04/18/24 17:59:30.268 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-40493098-a42b-477f-aa84=testing-taint-value-723a4d22-f40c-4a2d-9281-31cceb561d82:PreferNoSchedule 04/18/24 17:59:30.287 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0c6c0be0-ace3-474d-aad9=testing-taint-value-5a161732-2535-4237-bef7-c771a0e59262:PreferNoSchedule 04/18/24 17:59:30.306 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d297a96e-6583-47c6-bf50=testing-taint-value-1c486446-8ecb-4926-b4bf-c607ec0543f5:PreferNoSchedule 04/18/24 17:59:30.325 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e58a4be0-d1b6-416a-aca3=testing-taint-value-cff49141-2cf7-4cc2-ac4f-51456c64d480:PreferNoSchedule 04/18/24 17:59:30.344 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6a7fabbf-6cc0-4fbb-b0e0=testing-taint-value-6b4f96bf-b05c-4ac3-86ea-51453c182ea1:PreferNoSchedule 04/18/24 17:59:30.364 STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-ab032c34-8b13-4461-932a=testing-taint-value-71500e82-5328-4003-97ec-833b70963e53:PreferNoSchedule 04/18/24 17:59:30.384 STEP: Adding 10 intolerable taints to all other nodes 04/18/24 17:59:30.394 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4de6c8d2-aa15-4a8c-935e=testing-taint-value-0872aa73-d858-4cf7-91d9-38e4b880a945:PreferNoSchedule 04/18/24 17:59:30.408 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fd0cc0e6-d315-42dc-aba2=testing-taint-value-f617c67b-7bd3-411b-8385-a3f1fa86221f:PreferNoSchedule 04/18/24 17:59:30.428 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e0356735-b4a2-4fd0-bca6=testing-taint-value-849ab084-99b4-4b63-85f0-b5dc105b2ee0:PreferNoSchedule 04/18/24 17:59:30.447 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1096d659-fa3b-413c-84e1=testing-taint-value-2ce6bff1-2889-4e4a-8344-d2a6cf9f55ce:PreferNoSchedule 04/18/24 17:59:30.466 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5af1b63e-4a7a-43e9-87c1=testing-taint-value-74f22f68-2752-4893-aba5-9db291c03bd7:PreferNoSchedule 04/18/24 17:59:30.486 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-76eb19b2-373c-4dd1-aae5=testing-taint-value-4c24219c-622a-4641-83db-a7906a238ff7:PreferNoSchedule 04/18/24 17:59:30.505 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ff5547d6-f2a4-4eaf-bb78=testing-taint-value-37373f25-c6df-4ca2-bf8d-024811d39d5a:PreferNoSchedule 04/18/24 17:59:30.524 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8eb26dc7-474e-4cf3-9672=testing-taint-value-1ff71597-f59f-48bb-baf0-74897ed6adf6:PreferNoSchedule 04/18/24 17:59:30.544 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3912d640-f62b-4481-b32e=testing-taint-value-735cb572-4453-423e-a4b5-b77bb6a0324d:PreferNoSchedule 04/18/24 17:59:30.585 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-300c6a68-2411-496e-841a=testing-taint-value-41bf135b-e92a-4916-992e-1c8a3416164c:PreferNoSchedule 04/18/24 17:59:30.699 STEP: Create a pod that tolerates all the taints of the first node. 04/18/24 17:59:30.739 Apr 18 17:59:30.790: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-7870" to be "running" Apr 18 17:59:30.838: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 47.79492ms Apr 18 17:59:32.842: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.051796268s Apr 18 17:59:32.842: INFO: Pod "with-tolerations" satisfied condition "running" STEP: Pod should prefer scheduled to the node that pod can tolerate. 
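Every taint in the phase above is soft (PreferNoSchedule): ten tolerable taints on the first node, ten intolerable ones on every other node, so the with-tolerations pod is steered, rather than forced, onto the first node by taint-toleration scoring. Roughly, each taint/toleration pair looks like the sketch below; the key and value strings are the randomized ones from the log, and the helper names are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// preferNoScheduleTaint builds the kind of soft taint applied above;
// pods lacking a matching toleration are only deprioritized, never
// excluded, under PreferNoSchedule.
func preferNoScheduleTaint(key, value string) corev1.Taint {
	return corev1.Taint{
		Key:    key,   // e.g. kubernetes.io/e2e-scheduling-priorities-<uuid>
		Value:  value, // e.g. testing-taint-value-<uuid>
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
}

// tolerationFor returns the toleration the "with-tolerations" pod would
// carry for one such taint.
func tolerationFor(t corev1.Taint) corev1.Toleration {
	return corev1.Toleration{
		Key:      t.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    t.Value,
		Effect:   t.Effect,
	}
}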
04/18/24 17:59:32.842 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4de6c8d2-aa15-4a8c-935e=testing-taint-value-0872aa73-d858-4cf7-91d9-38e4b880a945:PreferNoSchedule 04/18/24 17:59:32.864 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fd0cc0e6-d315-42dc-aba2=testing-taint-value-f617c67b-7bd3-411b-8385-a3f1fa86221f:PreferNoSchedule 04/18/24 17:59:32.882 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e0356735-b4a2-4fd0-bca6=testing-taint-value-849ab084-99b4-4b63-85f0-b5dc105b2ee0:PreferNoSchedule 04/18/24 17:59:32.902 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1096d659-fa3b-413c-84e1=testing-taint-value-2ce6bff1-2889-4e4a-8344-d2a6cf9f55ce:PreferNoSchedule 04/18/24 17:59:32.921 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5af1b63e-4a7a-43e9-87c1=testing-taint-value-74f22f68-2752-4893-aba5-9db291c03bd7:PreferNoSchedule 04/18/24 17:59:32.94 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-76eb19b2-373c-4dd1-aae5=testing-taint-value-4c24219c-622a-4641-83db-a7906a238ff7:PreferNoSchedule 04/18/24 17:59:32.959 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ff5547d6-f2a4-4eaf-bb78=testing-taint-value-37373f25-c6df-4ca2-bf8d-024811d39d5a:PreferNoSchedule 04/18/24 17:59:32.978 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8eb26dc7-474e-4cf3-9672=testing-taint-value-1ff71597-f59f-48bb-baf0-74897ed6adf6:PreferNoSchedule 04/18/24 17:59:32.996 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3912d640-f62b-4481-b32e=testing-taint-value-735cb572-4453-423e-a4b5-b77bb6a0324d:PreferNoSchedule 04/18/24 17:59:33.015 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-300c6a68-2411-496e-841a=testing-taint-value-41bf135b-e92a-4916-992e-1c8a3416164c:PreferNoSchedule 04/18/24 17:59:33.033 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e7cc4073-964e-47a9-96c9=testing-taint-value-6fc8d0b3-efe1-4450-a9b3-462f54f48443:PreferNoSchedule 04/18/24 17:59:33.052 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dbd5e7ba-c458-498c-b5c7=testing-taint-value-74efdc91-ec90-4276-a84f-d429ee789afe:PreferNoSchedule 04/18/24 17:59:33.071 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4b6d3b93-2c27-4cb5-ba32=testing-taint-value-4f882119-f376-4f54-8ed4-019a6e51810b:PreferNoSchedule 04/18/24 17:59:33.091 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0c048559-8b8e-49e0-bc78=testing-taint-value-90b6e6ea-eaa9-4d39-b954-b59c7d6cfc95:PreferNoSchedule 04/18/24 17:59:33.11 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-40493098-a42b-477f-aa84=testing-taint-value-723a4d22-f40c-4a2d-9281-31cceb561d82:PreferNoSchedule 04/18/24 17:59:33.146 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0c6c0be0-ace3-474d-aad9=testing-taint-value-5a161732-2535-4237-bef7-c771a0e59262:PreferNoSchedule 04/18/24 17:59:33.295 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d297a96e-6583-47c6-bf50=testing-taint-value-1c486446-8ecb-4926-b4bf-c607ec0543f5:PreferNoSchedule 04/18/24 
17:59:33.445 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e58a4be0-d1b6-416a-aca3=testing-taint-value-cff49141-2cf7-4cc2-ac4f-51456c64d480:PreferNoSchedule 04/18/24 17:59:33.596 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6a7fabbf-6cc0-4fbb-b0e0=testing-taint-value-6b4f96bf-b05c-4ac3-86ea-51453c182ea1:PreferNoSchedule 04/18/24 17:59:33.745 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ab032c34-8b13-4461-932a=testing-taint-value-71500e82-5328-4003-97ec-833b70963e53:PreferNoSchedule 04/18/24 17:59:33.896 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 17:59:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-7870" for this suite. 04/18/24 17:59:36.049 ------------------------------ • [SLOW TEST] [71.011 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 17:58:25.045 Apr 18 17:58:25.045: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/18/24 17:58:25.046 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:58:25.055 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:58:25.058 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 18 17:58:25.062: INFO: Waiting up to 1m0s for all nodes to be ready Apr 18 17:59:25.088: INFO: Waiting for terminating namespaces to be deleted... Apr 18 17:59:25.092: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 18 17:59:25.106: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 18 17:59:25.106: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 18 17:59:25.113: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:25.113: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.113: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.113: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.113: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:25.113: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.113: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.113: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.113: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Apr 18 17:59:25.121: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:25.121: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.121: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.121: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.121: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:25.121: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:25.121: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:25.121: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 18 17:59:25.121: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 Apr 18 17:59:25.132: INFO: Waiting for running... Apr 18 17:59:25.133: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
04/18/24 17:59:30.193 Apr 18 17:59:30.193: INFO: ComputeCPUMemFraction for node: v126-worker Apr 18 17:59:30.193: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800 Apr 18 17:59:30.193: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: 4c5fbca8-2fa0-48ce-af01-7e37d608b74f-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:59:30.193: INFO: Node: v126-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:59:30.193: INFO: Node: v126-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 17:59:30.193 Apr 18 17:59:30.193: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 18 17:59:30.193: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800 Apr 18 17:59:30.193: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200 Apr 18 17:59:30.193: INFO: Pod for on the node: 112705e3-5ef1-4189-b002-90410ba5fe46-0, Cpu: 43800, Mem: 33561339904 Apr 18 17:59:30.193: INFO: Node: v126-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Apr 18 17:59:30.193: INFO: Node: v126-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Trying to apply 10 (tolerable) taints on the first node. 04/18/24 17:59:30.193 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e7cc4073-964e-47a9-96c9=testing-taint-value-6fc8d0b3-efe1-4450-a9b3-462f54f48443:PreferNoSchedule 04/18/24 17:59:30.21 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dbd5e7ba-c458-498c-b5c7=testing-taint-value-74efdc91-ec90-4276-a84f-d429ee789afe:PreferNoSchedule 04/18/24 17:59:30.229 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4b6d3b93-2c27-4cb5-ba32=testing-taint-value-4f882119-f376-4f54-8ed4-019a6e51810b:PreferNoSchedule 04/18/24 17:59:30.249 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0c048559-8b8e-49e0-bc78=testing-taint-value-90b6e6ea-eaa9-4d39-b954-b59c7d6cfc95:PreferNoSchedule 04/18/24 17:59:30.268 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-40493098-a42b-477f-aa84=testing-taint-value-723a4d22-f40c-4a2d-9281-31cceb561d82:PreferNoSchedule 04/18/24 17:59:30.287 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0c6c0be0-ace3-474d-aad9=testing-taint-value-5a161732-2535-4237-bef7-c771a0e59262:PreferNoSchedule 04/18/24 17:59:30.306 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d297a96e-6583-47c6-bf50=testing-taint-value-1c486446-8ecb-4926-b4bf-c607ec0543f5:PreferNoSchedule 04/18/24 17:59:30.325 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e58a4be0-d1b6-416a-aca3=testing-taint-value-cff49141-2cf7-4cc2-ac4f-51456c64d480:PreferNoSchedule 04/18/24 17:59:30.344 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6a7fabbf-6cc0-4fbb-b0e0=testing-taint-value-6b4f96bf-b05c-4ac3-86ea-51453c182ea1:PreferNoSchedule 04/18/24 17:59:30.364 STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-ab032c34-8b13-4461-932a=testing-taint-value-71500e82-5328-4003-97ec-833b70963e53:PreferNoSchedule 04/18/24 17:59:30.384 STEP: Adding 10 intolerable taints to all other nodes 04/18/24 17:59:30.394 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4de6c8d2-aa15-4a8c-935e=testing-taint-value-0872aa73-d858-4cf7-91d9-38e4b880a945:PreferNoSchedule 04/18/24 17:59:30.408 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fd0cc0e6-d315-42dc-aba2=testing-taint-value-f617c67b-7bd3-411b-8385-a3f1fa86221f:PreferNoSchedule 04/18/24 17:59:30.428 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e0356735-b4a2-4fd0-bca6=testing-taint-value-849ab084-99b4-4b63-85f0-b5dc105b2ee0:PreferNoSchedule 04/18/24 17:59:30.447 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1096d659-fa3b-413c-84e1=testing-taint-value-2ce6bff1-2889-4e4a-8344-d2a6cf9f55ce:PreferNoSchedule 04/18/24 17:59:30.466 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5af1b63e-4a7a-43e9-87c1=testing-taint-value-74f22f68-2752-4893-aba5-9db291c03bd7:PreferNoSchedule 04/18/24 17:59:30.486 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-76eb19b2-373c-4dd1-aae5=testing-taint-value-4c24219c-622a-4641-83db-a7906a238ff7:PreferNoSchedule 04/18/24 17:59:30.505 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ff5547d6-f2a4-4eaf-bb78=testing-taint-value-37373f25-c6df-4ca2-bf8d-024811d39d5a:PreferNoSchedule 04/18/24 17:59:30.524 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8eb26dc7-474e-4cf3-9672=testing-taint-value-1ff71597-f59f-48bb-baf0-74897ed6adf6:PreferNoSchedule 04/18/24 17:59:30.544 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3912d640-f62b-4481-b32e=testing-taint-value-735cb572-4453-423e-a4b5-b77bb6a0324d:PreferNoSchedule 04/18/24 17:59:30.585 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-300c6a68-2411-496e-841a=testing-taint-value-41bf135b-e92a-4916-992e-1c8a3416164c:PreferNoSchedule 04/18/24 17:59:30.699 STEP: Create a pod that tolerates all the taints of the first node. 04/18/24 17:59:30.739 Apr 18 17:59:30.790: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-7870" to be "running" Apr 18 17:59:30.838: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 47.79492ms Apr 18 17:59:32.842: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.051796268s Apr 18 17:59:32.842: INFO: Pod "with-tolerations" satisfied condition "running" STEP: Pod should prefer scheduled to the node that pod can tolerate. 
04/18/24 17:59:32.842 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4de6c8d2-aa15-4a8c-935e=testing-taint-value-0872aa73-d858-4cf7-91d9-38e4b880a945:PreferNoSchedule 04/18/24 17:59:32.864 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fd0cc0e6-d315-42dc-aba2=testing-taint-value-f617c67b-7bd3-411b-8385-a3f1fa86221f:PreferNoSchedule 04/18/24 17:59:32.882 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e0356735-b4a2-4fd0-bca6=testing-taint-value-849ab084-99b4-4b63-85f0-b5dc105b2ee0:PreferNoSchedule 04/18/24 17:59:32.902 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1096d659-fa3b-413c-84e1=testing-taint-value-2ce6bff1-2889-4e4a-8344-d2a6cf9f55ce:PreferNoSchedule 04/18/24 17:59:32.921 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5af1b63e-4a7a-43e9-87c1=testing-taint-value-74f22f68-2752-4893-aba5-9db291c03bd7:PreferNoSchedule 04/18/24 17:59:32.94 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-76eb19b2-373c-4dd1-aae5=testing-taint-value-4c24219c-622a-4641-83db-a7906a238ff7:PreferNoSchedule 04/18/24 17:59:32.959 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ff5547d6-f2a4-4eaf-bb78=testing-taint-value-37373f25-c6df-4ca2-bf8d-024811d39d5a:PreferNoSchedule 04/18/24 17:59:32.978 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8eb26dc7-474e-4cf3-9672=testing-taint-value-1ff71597-f59f-48bb-baf0-74897ed6adf6:PreferNoSchedule 04/18/24 17:59:32.996 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3912d640-f62b-4481-b32e=testing-taint-value-735cb572-4453-423e-a4b5-b77bb6a0324d:PreferNoSchedule 04/18/24 17:59:33.015 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-300c6a68-2411-496e-841a=testing-taint-value-41bf135b-e92a-4916-992e-1c8a3416164c:PreferNoSchedule 04/18/24 17:59:33.033 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e7cc4073-964e-47a9-96c9=testing-taint-value-6fc8d0b3-efe1-4450-a9b3-462f54f48443:PreferNoSchedule 04/18/24 17:59:33.052 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dbd5e7ba-c458-498c-b5c7=testing-taint-value-74efdc91-ec90-4276-a84f-d429ee789afe:PreferNoSchedule 04/18/24 17:59:33.071 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4b6d3b93-2c27-4cb5-ba32=testing-taint-value-4f882119-f376-4f54-8ed4-019a6e51810b:PreferNoSchedule 04/18/24 17:59:33.091 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0c048559-8b8e-49e0-bc78=testing-taint-value-90b6e6ea-eaa9-4d39-b954-b59c7d6cfc95:PreferNoSchedule 04/18/24 17:59:33.11 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-40493098-a42b-477f-aa84=testing-taint-value-723a4d22-f40c-4a2d-9281-31cceb561d82:PreferNoSchedule 04/18/24 17:59:33.146 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0c6c0be0-ace3-474d-aad9=testing-taint-value-5a161732-2535-4237-bef7-c771a0e59262:PreferNoSchedule 04/18/24 17:59:33.295 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d297a96e-6583-47c6-bf50=testing-taint-value-1c486446-8ecb-4926-b4bf-c607ec0543f5:PreferNoSchedule 04/18/24 
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e58a4be0-d1b6-416a-aca3=testing-taint-value-cff49141-2cf7-4cc2-ac4f-51456c64d480:PreferNoSchedule 04/18/24 17:59:33.596
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6a7fabbf-6cc0-4fbb-b0e0=testing-taint-value-6b4f96bf-b05c-4ac3-86ea-51453c182ea1:PreferNoSchedule 04/18/24 17:59:33.745
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ab032c34-8b13-4461-932a=testing-taint-value-71500e82-5328-4003-97ec-833b70963e53:PreferNoSchedule 04/18/24 17:59:33.896
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 17:59:36.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-priority-7870" for this suite. 04/18/24 17:59:36.049
<< End Captured GinkgoWriter Output
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod
test/e2e/scheduling/preemption.go:327
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 17:59:36.11
Apr 18 17:59:36.110: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/18/24 17:59:36.112
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 17:59:36.124
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 17:59:36.128
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 18 17:59:36.144: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 18:00:36.170: INFO: Waiting for terminating namespaces to be deleted...
[It] validates pod disruption condition is added to the preempted pod
test/e2e/scheduling/preemption.go:327
STEP: Select a node to run the lower and higher priority pods 04/18/24 18:00:36.173
STEP: Create a low priority pod that consumes 1/1 of node resources 04/18/24 18:00:36.186
Apr 18 18:00:36.197: INFO: Created pod: victim-pod
STEP: Wait for the victim pod to be scheduled 04/18/24 18:00:36.197
Apr 18 18:00:36.198: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-5057" to be "running"
Apr 18 18:00:36.201: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241083ms
Apr 18 18:00:38.206: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008674327s
Apr 18 18:00:38.206: INFO: Pod "victim-pod" satisfied condition "running"
STEP: Create a high priority pod to trigger preemption of the lower priority pod 04/18/24 18:00:38.206
Apr 18 18:00:38.213: INFO: Created pod: preemptor-pod
STEP: Waiting for the victim pod to be terminating 04/18/24 18:00:38.213
Apr 18 18:00:38.213: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-5057" to be "is terminating"
Apr 18 18:00:38.217: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3.439925ms
Apr 18 18:00:40.222: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008120174s
Apr 18 18:00:40.222: INFO: Pod "victim-pod" satisfied condition "is terminating"
STEP: Verifying the pod has the pod disruption condition 04/18/24 18:00:40.222
Apr 18 18:00:40.225: INFO: Removing pod's "victim-pod" finalizer: "example.com/test-finalizer"
Apr 18 18:00:40.740: INFO: Successfully updated pod "victim-pod"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:00:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-5057" for this suite. 04/18/24 18:00:40.784
------------------------------
• [SLOW TEST] [64.680 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:327
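For context on what this spec exercises: the victim pod runs under a low PriorityClass, the preemptor under a high one, and after preemption the victim's status should carry a "DisruptionTarget" condition. A rough sketch of the objects involved, under the assumption of hypothetical class names and values (the suite registers its own priority classes, and a real client would fetch the victim from the API server):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical high-priority class; larger Value wins preemption.
        high := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
            Value:      1000,
        }

        preemptor := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
            Spec: v1.PodSpec{
                PriorityClassName: high.Name,
                Containers:        []v1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
            },
        }

        // After preemption the victim's status should include a condition of
        // type "DisruptionTarget" with Status=True; a client would check it
        // roughly like this (victim shown empty here for brevity):
        var victim v1.Pod
        for _, c := range victim.Status.Conditions {
            if string(c.Type) == "DisruptionTarget" && c.Status == v1.ConditionTrue {
                fmt.Println("pod was preempted:", c.Message)
            }
        }
        _ = preemptor
    }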
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
test/e2e/scheduling/predicates.go:748
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:00:40.828
Apr 18 18:00:40.828: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:00:40.83
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:00:40.841
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:00:40.845
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:00:40.849: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:00:40.865: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:00:40.873: INFO: Logging pods the apiserver thinks are on node v126-worker before test
Apr 18 18:00:40.886: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.886: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:40.886: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.886: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:40.886: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.886: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:00:40.886: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test
Apr 18 18:00:40.892: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.892: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:40.892: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.892: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:40.892: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:40.892: INFO: Container kube-proxy ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:731
STEP: Trying to get 2 available nodes which can run pods 04/18/24 18:00:40.892
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:00:40.893
Apr 18 18:00:40.908: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5437" to be "running"
Apr 18 18:00:40.914: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 5.952384ms
Apr 18 18:00:42.919: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.010919368s
Apr 18 18:00:42.919: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:00:42.922
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:00:42.93
Apr 18 18:00:42.935: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5437" to be "running"
Apr 18 18:00:42.939: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07212ms
Apr 18 18:00:44.942: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006835775s
Apr 18 18:00:44.942: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:00:44.946
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 04/18/24 18:00:44.955
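The four test pods are then created with a topology spread constraint keyed on the node label applied above; with MaxSkew=1 and two topology domains, the scheduler must place two pods on each node. A minimal sketch of such a constraint using the k8s.io/api types (pod name and the foo=bar label are illustrative, not necessarily what the suite uses):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "spread-pod",
                Labels: map[string]string{"foo": "bar"},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
                TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
                    // Pod counts per topology domain may differ by at most 1.
                    MaxSkew:           1,
                    TopologyKey:       "kubernetes.io/e2e-pts-filter",
                    WhenUnsatisfiable: v1.DoNotSchedule,
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"foo": "bar"},
                    },
                }},
            },
        }
        fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
    }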
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
test/e2e/scheduling/predicates.go:748
[AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:742
STEP: removing the label kubernetes.io/e2e-pts-filter off the node v126-worker 04/18/24 18:00:46.992
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 04/18/24 18:00:47.006
STEP: removing the label kubernetes.io/e2e-pts-filter off the node v126-worker2 04/18/24 18:00:47.009
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 04/18/24 18:00:47.024
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:00:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-5437" for this suite. 04/18/24 18:00:47.033
------------------------------
• [SLOW TEST] [6.209 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
PodTopologySpread Filtering test/e2e/scheduling/predicates.go:727
validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:748
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
test/e2e/scheduling/predicates.go:276
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:00:47.078
Apr 18 18:00:47.078: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:00:47.08
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:00:47.091
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:00:47.095
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:00:47.098: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:00:47.105: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:00:47.107: INFO: Logging pods the apiserver thinks are on node v126-worker before test
Apr 18 18:00:47.112: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.112: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:47.112: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.112: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:47.112: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.112: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:00:47.112: INFO: rs-e2e-pts-filter-9bz79 from sched-pred-5437 started at 2024-04-18 18:00:44 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.112: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 18 18:00:47.112: INFO: rs-e2e-pts-filter-kz55v from sched-pred-5437 started at 2024-04-18 18:00:44 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.112: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 18 18:00:47.112: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test
Apr 18 18:00:47.118: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.118: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:47.118: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.118: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:47.118: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.118: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:00:47.118: INFO: rs-e2e-pts-filter-m9bs8 from sched-pred-5437 started at 2024-04-18 18:00:45 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.118: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 18 18:00:47.118: INFO: rs-e2e-pts-filter-sdc9x from sched-pred-5437 started at 2024-04-18 18:00:44 +0000 UTC (1 container status recorded)
Apr 18 18:00:47.118: INFO: Container e2e-pts-filter ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:221
STEP: Add RuntimeClass and fake resource 04/18/24 18:00:47.125
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:00:47.125
Apr 18 18:00:47.131: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-934" to be "running"
Apr 18 18:00:47.134: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.714104ms
Apr 18 18:00:49.138: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006949128s
Apr 18 18:00:49.138: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:00:49.142
[It] verify pod overhead is accounted for test/e2e/scheduling/predicates.go:276
STEP: Starting Pod to consume most of the node's resource. 04/18/24 18:00:49.166
Apr 18 18:00:49.171: INFO: Waiting up to 5m0s for pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd" in namespace "sched-pred-934" to be "running"
Apr 18 18:00:49.174: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836818ms
Apr 18 18:00:51.178: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006899995s
Apr 18 18:00:53.178: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007059149s
Apr 18 18:00:55.179: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.00787422s
Apr 18 18:00:57.181: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd": Phase="Running", Reason="", readiness=true. Elapsed: 8.010060435s
Apr 18 18:00:57.181: INFO: Pod "filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd" satisfied condition "running"
STEP: Creating another pod that requires unavailable amount of resources. 04/18/24 18:00:57.181
STEP: Considering event: Type = [Warning], Name = [filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd.17c771bf2e96bd8a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient example.com/beardsecond. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd.17c771c0a38fe3bc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-934/filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd to v126-worker] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd.17c771c0c744a869], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd.17c771c0c819145c], Reason = [Created], Message = [Created container filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd.17c771c0d861c043], Reason = [Started], Message = [Started container filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [without-label.17c771beb53ed056], Reason = [Scheduled], Message = [Successfully assigned sched-pred-934/without-label to v126-worker] 04/18/24 18:00:57.186
STEP: Considering event: Type = [Normal], Name = [without-label.17c771bed6f53e94], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 18:00:57.187
STEP: Considering event: Type = [Normal], Name = [without-label.17c771bed7c2e58f], Reason = [Created], Message = [Created container without-label] 04/18/24 18:00:57.187
STEP: Considering event: Type = [Normal], Name = [without-label.17c771bee6f1628f], Reason = [Started], Message = [Started container without-label] 04/18/24 18:00:57.187
STEP: Considering event: Type = [Normal], Name = [without-label.17c771bf900567bd], Reason = [Killing], Message = [Stopping container without-label] 04/18/24 18:00:57.187
STEP: Considering event: Type = [Warning], Name = [additional-podb7d9f6dd-8c1d-4824-a5a8-18b5431bae8d.17c771c10cc01b17], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient example.com/beardsecond. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..] 04/18/24 18:00:57.198
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:256
STEP: Remove fake resource and RuntimeClass 04/18/24 18:00:58.199
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:00:58.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-934" for this suite. 04/18/24 18:00:58.22
------------------------------
• [SLOW TEST] [11.147 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:216
verify pod overhead is accounted for test/e2e/scheduling/predicates.go:276
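What "pod overhead" means here: the RuntimeClass added in BeforeEach declares a fixed per-pod overhead, and the scheduler charges requests plus overhead against node allocatable, which is why the second pod stays unschedulable with "Insufficient example.com/beardsecond" even though its own request would fit. A rough sketch of the two objects, with hypothetical handler name and quantities (the suite keys its fake resource and amounts differently):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        nodev1 "k8s.io/api/node/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        rc := &nodev1.RuntimeClass{
            ObjectMeta: metav1.ObjectMeta{Name: "overhead-class"}, // hypothetical
            Handler:    "runc",
            Overhead: &nodev1.Overhead{
                // Added on top of the pod's own requests at scheduling time.
                PodFixed: v1.ResourceList{
                    v1.ResourceCPU:    resource.MustParse("100m"),
                    v1.ResourceMemory: resource.MustParse("64Mi"),
                },
            },
        }

        rcName := rc.Name
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "overhead-pod"},
            Spec: v1.PodSpec{
                RuntimeClassName: &rcName, // opts the pod into the overhead
                Containers:       []v1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
            },
        }
        fmt.Println(*pod.Spec.RuntimeClassName)
    }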
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
test/e2e/scheduling/predicates.go:665
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:00:58.252
Apr 18 18:00:58.252: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:00:58.254
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:00:58.265
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:00:58.269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:00:58.273: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:00:58.282: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:00:58.285: INFO: Logging pods the apiserver thinks are on node v126-worker before test
Apr 18 18:00:58.291: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.291: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:58.291: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.291: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:58.291: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.291: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:00:58.291: INFO: filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd from sched-pred-934 started at 2024-04-18 18:00:55 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.291: INFO: Container filler-pod-3a82b042-50df-43aa-a555-3c4eb97ca2fd ready: true, restart count 0
Apr 18 18:00:58.291: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test
Apr 18 18:00:58.297: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.297: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:00:58.297: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.297: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:00:58.297: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:00:58.297: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:00:58.297
Apr 18 18:00:58.305: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-5734" to be "running"
Apr 18 18:00:58.308: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195782ms
Apr 18 18:01:00.313: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.00795438s
Apr 18 18:01:00.313: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:01:00.326
STEP: Trying to apply a random label on the found node. 04/18/24 18:01:00.338
STEP: verifying the node has the label kubernetes.io/e2e-b727a550-ca0f-44f1-b15f-ffde854f2a92 90 04/18/24 18:01:00.349
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled 04/18/24 18:01:00.352
Apr 18 18:01:00.358: INFO: Waiting up to 5m0s for pod "pod1" in namespace "sched-pred-5734" to be "not pending"
Apr 18 18:01:00.361: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.087014ms
Apr 18 18:01:02.366: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.00778107s
Apr 18 18:01:02.366: INFO: Pod "pod1" satisfied condition "not pending"
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.22.0.4 on the node which pod1 resides and expect scheduled 04/18/24 18:01:02.366
Apr 18 18:01:02.373: INFO: Waiting up to 5m0s for pod "pod2" in namespace "sched-pred-5734" to be "not pending"
Apr 18 18:01:02.376: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.841705ms
Apr 18 18:01:04.380: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007477268s
Apr 18 18:01:04.380: INFO: Pod "pod2" satisfied condition "not pending"
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.22.0.4 but use UDP protocol on the node which pod2 resides 04/18/24 18:01:04.38
Apr 18 18:01:04.386: INFO: Waiting up to 5m0s for pod "pod3" in namespace "sched-pred-5734" to be "not pending"
Apr 18 18:01:04.389: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.135577ms
Apr 18 18:01:06.393: INFO: Pod "pod3": Phase="Running", Reason="", readiness=false. Elapsed: 2.007171383s
Apr 18 18:01:06.393: INFO: Pod "pod3" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-b727a550-ca0f-44f1-b15f-ffde854f2a92 off the node v126-worker 04/18/24 18:01:06.393
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b727a550-ca0f-44f1-b15f-ffde854f2a92 04/18/24 18:01:06.406
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:01:06.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-5734" for this suite. 04/18/24 18:01:06.415
------------------------------
• [SLOW TEST] [8.168 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665
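The reason all three pods can coexist on one node: a host port binding is identified by the triple (hostIP, hostPort, protocol), so identical port numbers only conflict when all three elements match. A sketch of the three port declarations from this spec (the container port number is illustrative):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Same hostPort 54321 three times, but each triple differs in
        // hostIP or protocol, so the scheduler sees no conflict.
        ports := []v1.ContainerPort{
            {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},  // pod1
            {ContainerPort: 8080, HostPort: 54321, HostIP: "172.22.0.4", Protocol: v1.ProtocolTCP}, // pod2
            {ContainerPort: 8080, HostPort: 54321, HostIP: "172.22.0.4", Protocol: v1.ProtocolUDP}, // pod3
        }
        for _, p := range ports {
            fmt.Printf("%s:%d/%s\n", p.HostIP, p.HostPort, p.Protocol)
        }
    }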
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
test/e2e/scheduling/predicates.go:498
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:01:06.45
Apr 18 18:01:06.450: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:01:06.452
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:01:06.463
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:01:06.467
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:01:06.470: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:01:06.478: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:01:06.481: INFO: Logging pods the apiserver thinks are on node v126-worker before test
Apr 18 18:01:06.488: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:01:06.488: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:01:06.488: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:01:06.488: INFO: pod1 from sched-pred-5734 started at 2024-04-18 18:01:00 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container agnhost ready: true, restart count 0
Apr 18 18:01:06.488: INFO: pod2 from sched-pred-5734 started at 2024-04-18 18:01:02 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container agnhost ready: true, restart count 0
Apr 18 18:01:06.488: INFO: pod3 from sched-pred-5734 started at 2024-04-18 18:01:04 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.488: INFO: Container agnhost ready: false, restart count 0
Apr 18 18:01:06.488: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test
Apr 18 18:01:06.494: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.494: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:01:06.494: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.494: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:01:06.494: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 18:01:06.494: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498
STEP: Trying to schedule Pod with nonempty NodeSelector. 04/18/24 18:01:06.494
STEP: Considering event: Type = [Warning], Name = [restricted-pod.17c771c3384bc4b3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/18/24 18:01:06.518
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:01:07.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-9229" for this suite. 04/18/24 18:01:07.524
------------------------------
• [1.079 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498
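The restricted-pod above carries node requirements that no node can satisfy, so it stays Pending and the FailedScheduling event notes that preemption cannot help. The suite drives this through a non-matching selector; the equivalent required node affinity term looks roughly like the sketch below (the label key and value are hypothetical, standing in for the suite's random selector):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
                Affinity: &v1.Affinity{
                    NodeAffinity: &v1.NodeAffinity{
                        // Required terms are hard filters: no matching node,
                        // no scheduling, and preemption cannot help.
                        RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
                            NodeSelectorTerms: []v1.NodeSelectorTerm{{
                                MatchExpressions: []v1.NodeSelectorRequirement{{
                                    Key:      "kubernetes.io/e2e-nonexistent-label", // hypothetical
                                    Operator: v1.NodeSelectorOpIn,
                                    Values:   []string{"42"},
                                }},
                            }},
                        },
                    },
                },
            },
        }
        fmt.Println(pod.Name) // stays Pending with a FailedScheduling event, as in the log
    }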
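------------------------------
To reproduce the FailedScheduling event above outside the suite, all that is needed is a pod whose nonempty nodeSelector matches no node label. A minimal sketch with the Kubernetes Go types (the pause image matches the suite; the pod name and selector key/value are illustrative assumptions, not the suite's exact literals):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A nonempty nodeSelector that no node satisfies: kube-scheduler reports
	// "node(s) didn't match Pod's node affinity/selector" and the pod stays Pending.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // assumed key/value
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9",
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // pipe to `kubectl apply -f -` to observe the event
}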
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
test/e2e/scheduling/priorities.go:124
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:01:07.535
Apr 18 18:01:07.535: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-priority 04/18/24 18:01:07.537
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:01:07.548
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:01:07.552
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99
Apr 18 18:01:07.556: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 18:02:07.582: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:02:07.585: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 18 18:02:07.600: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 18 18:02:07.600: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 18 18:02:07.608: INFO: ComputeCPUMemFraction for node: v126-worker
Apr 18 18:02:07.608: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200
Apr 18 18:02:07.608: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800
Apr 18 18:02:07.608: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200
Apr 18 18:02:07.608: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 18 18:02:07.608: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
Apr 18 18:02:07.608: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 18 18:02:07.608: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200
Apr 18 18:02:07.608: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800
Apr 18 18:02:07.608: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200
Apr 18 18:02:07.608: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 18 18:02:07.608: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms
test/e2e/scheduling/priorities.go:124
STEP: Trying to launch a pod with a label to get a node which can launch it. 04/18/24 18:02:07.608
Apr 18 18:02:07.619: INFO: Waiting up to 1m0s for pod "pod-with-label-security-s1" in namespace "sched-priority-9346" to be "running"
Apr 18 18:02:07.622: INFO: Pod "pod-with-label-security-s1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090933ms
Apr 18 18:02:09.627: INFO: Pod "pod-with-label-security-s1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008014217s
Apr 18 18:02:09.627: INFO: Pod "pod-with-label-security-s1" satisfied condition "running"
STEP: Verifying the node has a label kubernetes.io/hostname 04/18/24 18:02:09.63
Apr 18 18:02:09.643: INFO: ComputeCPUMemFraction for node: v126-worker
Apr 18 18:02:09.643: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200
Apr 18 18:02:09.643: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800
Apr 18 18:02:09.643: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200
Apr 18 18:02:09.643: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 18 18:02:09.643: INFO: Node: v126-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 18 18:02:09.643: INFO: Node: v126-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
Apr 18 18:02:09.643: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 18 18:02:09.643: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200
Apr 18 18:02:09.643: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800
Apr 18 18:02:09.643: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200
Apr 18 18:02:09.643: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 18 18:02:09.643: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
Apr 18 18:02:09.649: INFO: Waiting for running...
Apr 18 18:02:09.650: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 18:02:14.712
Apr 18 18:02:14.713: INFO: ComputeCPUMemFraction for node: v126-worker
Apr 18 18:02:14.713: INFO: Pod for on the node: create-loop-devs-w9ldx, Cpu: 100, Mem: 209715200
Apr 18 18:02:14.713: INFO: Pod for on the node: kindnet-68nxx, Cpu: 100, Mem: 52428800
Apr 18 18:02:14.713: INFO: Pod for on the node: kube-proxy-4wtz6, Cpu: 100, Mem: 209715200
Apr 18 18:02:14.713: INFO: Pod for on the node: dbd5a1aa-fece-42db-98e7-0ae65d23650a-0, Cpu: 52599, Mem: 40302548582
Apr 18 18:02:14.713: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 18 18:02:14.713: INFO: Node: v126-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364
Apr 18 18:02:14.713: INFO: Node: v126-worker, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168
STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/18/24 18:02:14.713
Apr 18 18:02:14.713: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 18 18:02:14.713: INFO: Pod for on the node: create-loop-devs-xnxkn, Cpu: 100, Mem: 209715200
Apr 18 18:02:14.713: INFO: Pod for on the node: kindnet-wqc6h, Cpu: 100, Mem: 52428800
Apr 18 18:02:14.713: INFO: Pod for on the node: kube-proxy-hjqqd, Cpu: 100, Mem: 209715200
Apr 18 18:02:14.713: INFO: Pod for on the node: 4d478762-2085-4035-bb41-be936b8f68f6-0, Cpu: 52599, Mem: 40302548582
Apr 18 18:02:14.713: INFO: Node: v126-worker2, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364
Apr 18 18:02:14.713: INFO: Node: v126-worker2, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168
STEP: Trying to launch the pod with podAntiAffinity. 04/18/24 18:02:14.713
STEP: Wait the pod becomes running 04/18/24 18:02:14.72
Apr 18 18:02:14.720: INFO: Waiting up to 5m0s for pod "pod-with-pod-antiaffinity" in namespace "sched-priority-9346" to be "running"
Apr 18 18:02:14.723: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550814ms
Apr 18 18:02:16.728: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007818411s
Apr 18 18:02:18.729: INFO: Pod "pod-with-pod-antiaffinity": Phase="Running", Reason="", readiness=true. Elapsed: 4.008974701s
Apr 18 18:02:18.729: INFO: Pod "pod-with-pod-antiaffinity" satisfied condition "running"
STEP: Verify the pod was scheduled to the expected node. 04/18/24 18:02:18.732
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:02:20.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-priority-9346" for this suite. 04/18/24 18:02:20.759
------------------------------
• [SLOW TEST] [73.230 seconds]
[sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40
  Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124
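------------------------------
The ComputeCPUMemFraction lines above are plain ratios of requested to allocatable resources, and the "balanced" filler pods are sized from the same numbers. A few lines of Go reproduce the logged values (a sketch of the arithmetic only, not the framework's code; the one-millicore gap between 52600 and the logged 52599 comes from rounding inside the framework):

package main

import "fmt"

func main() {
	const cpuAllocatableMil = 88000.0 // per-node allocatable CPU, in millicores, as logged
	const requestedMil = 200.0        // total already requested on the node, as logged
	fmt.Println(requestedMil / cpuAllocatableMil) // 0.0022727272727272726, the logged cpuFraction

	const memAllocatable = 67412086784.0 // bytes, as logged
	const requestedMem = 157286400.0     // bytes, as logged
	fmt.Println(requestedMem / memAllocatable) // 0.0023332077006304945, the logged memFraction

	// Sizing a filler pod that brings the node to ~60% requested CPU:
	const target = 0.6
	fmt.Println(target*cpuAllocatableMil - requestedMil) // 52600; the suite's filler pod requests 52599m
}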
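------------------------------
With both nodes balanced, the spec launches a pod that must avoid whichever node runs the security=S1 pod. A hedged sketch of such a spec with the Kubernetes Go types (the security=S1 label is inferred from the pod name pod-with-label-security-s1 in the log; treat the exact key/values as assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard anti-affinity against any pod labelled security=S1, scoped by hostname:
	// with pod-with-label-security-s1 on v126-worker, this pod can only land on v126-worker2.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security", // assumed label key
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"S1"},
							}},
						},
						TopologyKey: "kubernetes.io/hostname", // the node label verified in the log
					}},
				},
			},
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}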
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
test/e2e/scheduling/predicates.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:02:20.765
Apr 18 18:02:20.766: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:02:20.767
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:02:20.792
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:02:20.796
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:02:20.800: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:02:20.809: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:02:20.813: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 18:02:20.820: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.820: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:02:20.820: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.820: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:02:20.820: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.820: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:02:20.820: INFO: pod-with-label-security-s1 from sched-priority-9346 started at 2024-04-18 18:02:07 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.820: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
Apr 18 18:02:20.820: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 18:02:20.827: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.827: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:02:20.827: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.827: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:02:20.827: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.827: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:02:20.827: INFO: pod-with-pod-antiaffinity from sched-priority-9346 started at 2024-04-18 18:02:14 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:20.827: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
test/e2e/scheduling/predicates.go:630
STEP: Trying to launch a pod without a toleration to get a node which can launch it. 04/18/24 18:02:20.827
Apr 18 18:02:20.836: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-5748" to be "running"
Apr 18 18:02:20.840: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116401ms
Apr 18 18:02:22.844: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.008769405s
Apr 18 18:02:22.844: INFO: Pod "without-toleration" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:02:22.848
STEP: Trying to apply a random taint on the found node. 04/18/24 18:02:22.857
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d0965fc6-a014-4896-9f9b-6f2074d48606=testing-taint-value:NoSchedule 04/18/24 18:02:22.872
STEP: Trying to apply a random label on the found node. 04/18/24 18:02:22.876
STEP: verifying the node has the label kubernetes.io/e2e-label-key-1131333c-a6f0-4a71-ae04-41f70bffb02e testing-label-value 04/18/24 18:02:22.888
STEP: Trying to relaunch the pod, still no tolerations. 04/18/24 18:02:22.892
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4867ccae9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5748/without-toleration to v126-worker] 04/18/24 18:02:22.902
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4aa04f278], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 18:02:22.902
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4aad5f7ee], Reason = [Created], Message = [Created container without-toleration] 04/18/24 18:02:22.902
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4ba4bf9ee], Reason = [Started], Message = [Started container without-toleration] 04/18/24 18:02:22.902
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c771d501c51a49], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-d0965fc6-a014-4896-9f9b-6f2074d48606: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/18/24 18:02:22.913
STEP: Removing taint off the node 04/18/24 18:02:23.914
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c771d501c51a49], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-d0965fc6-a014-4896-9f9b-6f2074d48606: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/18/24 18:02:23.918
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4867ccae9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5748/without-toleration to v126-worker] 04/18/24 18:02:23.918
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4aa04f278], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 18:02:23.918
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4aad5f7ee], Reason = [Created], Message = [Created container without-toleration] 04/18/24 18:02:23.918
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d4ba4bf9ee], Reason = [Started], Message = [Started container without-toleration] 04/18/24 18:02:23.918
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d0965fc6-a014-4896-9f9b-6f2074d48606=testing-taint-value:NoSchedule 04/18/24 18:02:23.936
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c771d53f5e179a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5748/still-no-tolerations to v126-worker] 04/18/24 18:02:23.947
STEP: Considering event: Type = [Normal], Name = [without-toleration.17c771d54668911b], Reason = [Killing], Message = [Stopping container without-toleration] 04/18/24 18:02:24.066
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c771d56049712c], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 18:02:24.5
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c771d56134de62], Reason = [Created], Message = [Created container still-no-tolerations] 04/18/24 18:02:24.514
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c771d5707082a0], Reason = [Started], Message = [Started container still-no-tolerations] 04/18/24 18:02:24.77
STEP: removing the label kubernetes.io/e2e-label-key-1131333c-a6f0-4a71-ae04-41f70bffb02e off the node v126-worker 04/18/24 18:02:24.945
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-1131333c-a6f0-4a71-ae04-41f70bffb02e 04/18/24 18:02:24.959
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d0965fc6-a014-4896-9f9b-6f2074d48606=testing-taint-value:NoSchedule 04/18/24 18:02:24.965
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:02:24.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-5748" for this suite. 04/18/24 18:02:24.973
------------------------------
• [4.213 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:630
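------------------------------
The "apply a random taint" step amounts to adding a NoSchedule taint to the node's spec. A minimal client-go sketch under stated assumptions: the kubeconfig path matches this run, the taint key is an invented stand-in for the suite's UUID-suffixed key, and the suite itself patches with conflict handling rather than using this naive get/update:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "v126-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// NoSchedule: pods already on the node keep running; new pods without a
	// matching toleration are rejected by the scheduler.
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // illustrative; the suite appends a random UUID
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	})
	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err) // no retry here; a real client should retry on a 409 conflict
	}
	fmt.Println("tainted", node.Name)
}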
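------------------------------
The relaunched "still-no-tolerations" pod is pinned to the tainted node by the freshly applied random label but carries no toleration, which is exactly what produces the "untolerated taint" FailedScheduling event above. A sketch of such a pod (the label key is an invented stand-in for the suite's UUID-suffixed key):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pinned to the labelled (and tainted) node, but with no matching
	// toleration: the scheduler must refuse it while the taint is present.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "still-no-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-example": "testing-label-value", // illustrative key
			},
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}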
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
test/e2e/scheduling/predicates.go:587
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:02:25.017
Apr 18 18:02:25.017: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 18:02:25.019
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:02:25.031
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:02:25.035
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 18 18:02:25.039: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 18:02:25.047: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 18:02:25.051: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 18:02:25.057: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.057: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:02:25.057: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.058: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:02:25.058: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.058: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:02:25.058: INFO: still-no-tolerations from sched-pred-5748 started at 2024-04-18 18:02:23 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.058: INFO: Container still-no-tolerations ready: false, restart count 0
Apr 18 18:02:25.058: INFO: pod-with-label-security-s1 from sched-priority-9346 started at 2024-04-18 18:02:07 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.058: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
Apr 18 18:02:25.058: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 18:02:25.064: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.064: INFO: Container loopdev ready: true, restart count 0
Apr 18 18:02:25.064: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.064: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 18:02:25.064: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.064: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 18:02:25.064: INFO: pod-with-pod-antiaffinity from sched-priority-9346 started at 2024-04-18 18:02:14 +0000 UTC (1 container statuses recorded)
Apr 18 18:02:25.064: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
test/e2e/scheduling/predicates.go:587
STEP: Trying to launch a pod without a toleration to get a node which can launch it. 04/18/24 18:02:25.064
Apr 18 18:02:25.073: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-1573" to be "running"
Apr 18 18:02:25.077: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 3.376519ms
Apr 18 18:02:27.081: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.007926854s
Apr 18 18:02:27.081: INFO: Pod "without-toleration" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:02:27.084
STEP: Trying to apply a random taint on the found node. 04/18/24 18:02:27.094
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-dd5d4244-369b-4e47-bcdc-f48ff62d886b=testing-taint-value:NoSchedule 04/18/24 18:02:27.108
STEP: Trying to apply a random label on the found node. 04/18/24 18:02:27.112
STEP: verifying the node has the label kubernetes.io/e2e-label-key-5f9c012b-ed88-4c39-b4f0-a9e06bd82b2c testing-label-value 04/18/24 18:02:27.124
STEP: Trying to relaunch the pod, now with tolerations. 04/18/24 18:02:27.127
Apr 18 18:02:27.133: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-pred-1573" to be "not pending"
Apr 18 18:02:27.136: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05781ms
Apr 18 18:02:29.140: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.007330301s
Apr 18 18:02:29.140: INFO: Pod "with-tolerations" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-label-key-5f9c012b-ed88-4c39-b4f0-a9e06bd82b2c off the node v126-worker 04/18/24 18:02:29.144
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-5f9c012b-ed88-4c39-b4f0-a9e06bd82b2c 04/18/24 18:02:29.159
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-dd5d4244-369b-4e47-bcdc-f48ff62d886b=testing-taint-value:NoSchedule 04/18/24 18:02:29.179
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:02:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-1573" for this suite. 04/18/24 18:02:29.187
------------------------------
• [4.175 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:587
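------------------------------
The passing case differs from the failing one only in a toleration that matches the taint's key, value, and effect exactly. A sketch of the delta, with the same caveat that the UUID-suffixed keys below are invented stand-ins:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Same node pinning as before, plus a matching toleration: the scheduler
	// now admits the pod onto the tainted node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-example": "testing-label-value", // illustrative key
			},
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-example", // illustrative key
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}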
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
test/e2e/scheduling/preemption.go:434
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 18:02:29.196
Apr 18 18:02:29.197: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/18/24 18:02:29.2
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:02:29.217
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:02:29.221
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 18 18:02:29.235: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 18:03:29.261: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:399
STEP: Trying to get 2 available nodes which can run pod 04/18/24 18:03:29.265
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:03:29.265
Apr 18 18:03:29.277: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:29.281: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112573ms
Apr 18 18:03:31.285: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007640517s
Apr 18 18:03:31.285: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:03:31.289
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:03:31.297
Apr 18 18:03:31.303: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:31.306: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112586ms
Apr 18 18:03:33.311: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007588236s
Apr 18 18:03:33.311: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:03:33.314
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 04/18/24 18:03:33.323
STEP: Apply 10 fake resource to node v126-worker. 04/18/24 18:03:33.338
STEP: Apply 10 fake resource to node v126-worker2. 04/18/24 18:03:33.367
[It] validates proper pods are preempted
test/e2e/scheduling/preemption.go:434
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 04/18/24 18:03:33.387
Apr 18 18:03:33.403: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:33.406: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661303ms
Apr 18 18:03:35.410: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 2.00743665s
Apr 18 18:03:35.410: INFO: Pod "high" satisfied condition "running"
Apr 18 18:03:35.420: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:35.423: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.050767ms
Apr 18 18:03:37.427: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007266477s
Apr 18 18:03:37.427: INFO: Pod "low-1" satisfied condition "running"
Apr 18 18:03:37.436: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:37.439: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.076904ms
Apr 18 18:03:39.443: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007068604s
Apr 18 18:03:39.443: INFO: Pod "low-2" satisfied condition "running"
Apr 18 18:03:39.452: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:39.455: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.038249ms
Apr 18 18:03:41.459: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007319074s
Apr 18 18:03:41.459: INFO: Pod "low-3" satisfied condition "running"
STEP: Create 1 Medium Pod with TopologySpreadConstraints 04/18/24 18:03:41.463
Apr 18 18:03:41.469: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-6453" to be "running"
Apr 18 18:03:41.472: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298509ms
Apr 18 18:03:43.477: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008435323s
Apr 18 18:03:45.478: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 4.009029594s
Apr 18 18:03:45.478: INFO: Pod "medium" satisfied condition "running"
STEP: Verify there are 3 Pods left in this namespace 04/18/24 18:03:45.481
STEP: Pod "high" is as expected to be running. 04/18/24 18:03:45.486
STEP: Pod "low-1" is as expected to be running. 04/18/24 18:03:45.486
STEP: Pod "medium" is as expected to be running. 04/18/24 18:03:45.486
[AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:421
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v126-worker 04/18/24 18:03:45.486
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 04/18/24 18:03:45.501
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v126-worker2 04/18/24 18:03:45.504
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 04/18/24 18:03:45.519
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 18:03:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-6453" for this suite. 04/18/24 18:03:45.58
------------------------------
• [SLOW TEST] [76.387 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption test/e2e/scheduling/preemption.go:393
    validates proper pods are preempted test/e2e/scheduling/preemption.go:434
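------------------------------
The "high", "medium" and "low" pods above rely on PriorityClass objects created during preemption setup; a higher-value class is what lets "medium" evict a "low" pod when the fake resource is exhausted. A hedged sketch of equivalent objects (names and values here are illustrative, not the suite's own):

package main

import (
	"encoding/json"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Value ordering is all that matters for preemption: high > medium > low.
	for name, value := range map[string]int32{
		"e2e-high":   3000, // assumed values
		"e2e-medium": 2000,
		"e2e-low":    1000,
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta:    metav1.ObjectMeta{Name: name},
			Value:         value,
			GlobalDefault: false,
		}
		b, _ := json.Marshal(pc)
		fmt.Println(string(b))
	}
}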
• [SLOW TEST] [76.387 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  test/e2e/scheduling/preemption.go:393
    validates proper pods are preempted
    test/e2e/scheduling/preemption.go:434

  Begin Captured GinkgoWriter Output >>
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
    set up framework | framework.go:178
  STEP: Creating a kubernetes client 04/18/24 18:02:29.196
  Apr 18 18:02:29.197: INFO: >>> kubeConfig: /home/xtesting/.kube/config
  STEP: Building a namespace api object, basename sched-preemption 04/18/24 18:02:29.2
  STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 18:02:29.217
  STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 18:02:29.221
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/metrics/init/init.go:31
  [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:97
  Apr 18 18:02:29.235: INFO: Waiting up to 1m0s for all nodes to be ready
  Apr 18 18:03:29.261: INFO: Waiting for terminating namespaces to be deleted...
  [BeforeEach] PodTopologySpread Preemption
    test/e2e/scheduling/preemption.go:399
  STEP: Trying to get 2 available nodes which can run pod 04/18/24 18:03:29.265
  STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:03:29.265
  Apr 18 18:03:29.277: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:29.281: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112573ms
  Apr 18 18:03:31.285: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007640517s
  Apr 18 18:03:31.285: INFO: Pod "without-label" satisfied condition "running"
  STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:03:31.289
  STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 18:03:31.297
  Apr 18 18:03:31.303: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:31.306: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112586ms
  Apr 18 18:03:33.311: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007588236s
  Apr 18 18:03:33.311: INFO: Pod "without-label" satisfied condition "running"
  STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 18:03:33.314
  STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 04/18/24 18:03:33.323
  STEP: Apply 10 fake resource to node v126-worker. 04/18/24 18:03:33.338
  STEP: Apply 10 fake resource to node v126-worker2. 04/18/24 18:03:33.367
  [It] validates proper pods are preempted
    test/e2e/scheduling/preemption.go:434
  STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 04/18/24 18:03:33.387
  Apr 18 18:03:33.403: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:33.406: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.661303ms
  Apr 18 18:03:35.410: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 2.00743665s
  Apr 18 18:03:35.410: INFO: Pod "high" satisfied condition "running"
  Apr 18 18:03:35.420: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:35.423: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.050767ms
  Apr 18 18:03:37.427: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007266477s
  Apr 18 18:03:37.427: INFO: Pod "low-1" satisfied condition "running"
  Apr 18 18:03:37.436: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:37.439: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.076904ms
  Apr 18 18:03:39.443: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007068604s
  Apr 18 18:03:39.443: INFO: Pod "low-2" satisfied condition "running"
  Apr 18 18:03:39.452: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:39.455: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.038249ms
  Apr 18 18:03:41.459: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007319074s
  Apr 18 18:03:41.459: INFO: Pod "low-3" satisfied condition "running"
  STEP: Create 1 Medium Pod with TopologySpreadConstraints 04/18/24 18:03:41.463
  Apr 18 18:03:41.469: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-6453" to be "running"
  Apr 18 18:03:41.472: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298509ms
  Apr 18 18:03:43.477: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008435323s
  Apr 18 18:03:45.478: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 4.009029594s
  Apr 18 18:03:45.478: INFO: Pod "medium" satisfied condition "running"
  STEP: Verify there are 3 Pods left in this namespace 04/18/24 18:03:45.481
  STEP: Pod "high" is as expected to be running. 04/18/24 18:03:45.486
  STEP: Pod "low-1" is as expected to be running. 04/18/24 18:03:45.486
  STEP: Pod "medium" is as expected to be running. 04/18/24 18:03:45.486
  [AfterEach] PodTopologySpread Preemption
    test/e2e/scheduling/preemption.go:421
  STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v126-worker 04/18/24 18:03:45.486
  STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 04/18/24 18:03:45.501
  STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v126-worker2 04/18/24 18:03:45.504
  STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 04/18/24 18:03:45.519
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/node/init/init.go:32
  Apr 18 18:03:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
  [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/scheduling/preemption.go:84
  [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
    test/e2e/framework/metrics/init/init.go:33
  [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
    dump namespaces | framework.go:196
  [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
    tear down framework | framework.go:193
  STEP: Destroying namespace "sched-preemption-6453" for this suite. 04/18/24 18:03:45.58
  << End Captured GinkgoWriter Output
------------------------------
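------------------------------
Note: the captured output replays the suite's BeforeEach (preemption.go:97), which among other things provisions the pod priority classes that the "high", "medium", and "low-*" pods reference. A hedged sketch of creating such a class with client-go; the class name and Value are illustrative, not the suite's actual settings:

    package main

    import (
    	"context"
    	"fmt"

    	schedulingv1 "k8s.io/api/scheduling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The scheduler preempts pods whose class has a lower Value in
    	// order to place pods whose class has a higher one.
    	pc := &schedulingv1.PriorityClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "medium-priority"},
    		Value:      100,
    	}
    	created, err := client.SchedulingV1().PriorityClasses().Create(
    		context.TODO(), pc, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("created priority class:", created.Name)
    }
------------------------------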
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
Apr 18 18:03:45.668: INFO: Running AfterSuite actions on node 1
Apr 18 18:03:45.668: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
test/e2e/e2e.go:88

  Begin Captured GinkgoWriter Output >>
  [SynchronizedAfterSuite] TOP-LEVEL
    test/e2e/e2e.go:88
  [SynchronizedAfterSuite] TOP-LEVEL
    test/e2e/e2e.go:88
  Apr 18 18:03:45.668: INFO: Running AfterSuite actions on node 1
  Apr 18 18:03:45.668: INFO: Skipping dumping logs from cluster
  << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:153
[ReportAfterSuite] TOP-LEVEL
  test/e2e/e2e_test.go:153
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:153

  Begin Captured GinkgoWriter Output >>
  [ReportAfterSuite] TOP-LEVEL
    test/e2e/e2e_test.go:153
  << End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529
[ReportAfterSuite] TOP-LEVEL
  test/e2e/framework/test_context.go:529
------------------------------
[ReportAfterSuite] PASSED [0.114 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:529

  Begin Captured GinkgoWriter Output >>
  [ReportAfterSuite] TOP-LEVEL
    test/e2e/framework/test_context.go:529
  << End Captured GinkgoWriter Output
------------------------------

Ran 13 of 7069 Specs in 413.610 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 7056 Skipped
PASS

Ginkgo ran 1 suite in 6m54.075544103s
Test Suite Passed
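------------------------------
Note: the two [ReportAfterSuite] nodes above are Ginkgo v2 reporting hooks. They run once, after all specs and any AfterSuite nodes have finished, and receive the aggregated report for the run (the second node is what writes the JUnit file). A minimal sketch of registering such a hook in a Ginkgo suite; the package and suite names are illustrative:

    package sketch_test

    import (
    	"fmt"
    	"testing"

    	"github.com/onsi/ginkgo/v2"
    	"github.com/onsi/gomega"
    )

    func TestSketchSuite(t *testing.T) {
    	gomega.RegisterFailHandler(ginkgo.Fail)
    	ginkgo.RunSpecs(t, "Sketch Suite")
    }

    // ReportAfterSuite receives the whole run's aggregated report, which
    // is how summaries like "13 Passed | 0 Failed" above are derived.
    var _ = ginkgo.ReportAfterSuite("example suite report", func(report ginkgo.Report) {
    	failed := 0
    	for _, spec := range report.SpecReports {
    		if spec.Failed() {
    			failed++
    		}
    	}
    	fmt.Printf("%d of %d spec reports failed; run time %s\n",
    		failed, len(report.SpecReports), report.RunTime)
    })
------------------------------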