I0203 00:59:31.941647 16 e2e.go:116] Starting e2e run "4e142d29-2366-4233-91a8-d6183f89c9d7" on Ginkgo node 1 Feb 3 00:59:31.954: INFO: Enabling in-tree volume drivers Running Suite: Kubernetes e2e suite - /usr/local/bin ==================================================== Random Seed: 1675385971 - will randomize all specs Will run 14 of 7066 specs
------------------------------
[SynchronizedBeforeSuite] test/e2e/e2e.go:76 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76 {"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0} Feb 3 00:59:32.085: INFO: >>> kubeConfig: /home/xtesting/.kube/config Feb 3 00:59:32.087: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Feb 3 00:59:32.114: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 00:59:32.144: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 00:59:32.144: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 3 00:59:32.144: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Feb 3 00:59:32.150: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Feb 3 00:59:32.150: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Feb 3 00:59:32.150: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Feb 3 00:59:32.150: INFO: e2e test version: v1.25.6 Feb 3 00:59:32.152: INFO: kube-apiserver version: v1.25.2 [SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76 Feb 3 00:59:32.152: INFO: >>> kubeConfig: /home/xtesting/.kube/config Feb 3 00:59:32.157: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.072 seconds] [SynchronizedBeforeSuite] test/e2e/e2e.go:76
------------------------------
[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:59:32.206 Feb 3 00:59:32.207: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 02/03/23 00:59:32.208 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:59:32.218 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:59:32.222 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. Schedulable zone count = 0 02/03/23 00:59:32.23 Feb 3 00:59:32.230: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:187 Feb 3 00:59:32.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-8976" for this suite. 02/03/23 00:59:32.234 [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72
------------------------------
S [SKIPPED] [0.032 seconds] [sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51 should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 Schedulable zone count is 0, only run for multi-zone clusters, skipping test In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61
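The skip above comes from the suite counting distinct zones among schedulable nodes and finding none. As a rough, hedged illustration of what that check boils down to (this is not the e2e framework's own helper; only the kubeconfig path is taken from this run, everything else is assumed), a client-go sketch that counts distinct topology.kubernetes.io/zone labels:

```go
// zone_count.go: minimal sketch of a schedulable-zone count, assuming client-go
// is available and the kubeconfig path used in the run above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	zones := map[string]struct{}{}
	for _, n := range nodes.Items {
		if n.Spec.Unschedulable {
			continue // only schedulable nodes count toward the zone total
		}
		// Well-known topology label; kind-style nodes typically do not carry it.
		if z, ok := n.Labels["topology.kubernetes.io/zone"]; ok && z != "" {
			zones[z] = struct{}{}
		}
	}
	fmt.Printf("schedulable zone count: %d\n", len(zones))
}
```

On a kind-style cluster such as this one the nodes normally have no zone label, so the count is 0 and the multi-zone spec is skipped rather than failed.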
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:59:32.294 Feb 3 00:59:32.294: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 00:59:32.295 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:59:32.304 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:59:32.307 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 00:59:32.311: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 00:59:32.318: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 00:59:32.321: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 00:59:32.326: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.326: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:59:32.326: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.326: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:59:32.326: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.326: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:59:32.326: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 00:59:32.331: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.331: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:59:32.331: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.331: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:59:32.331: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:32.331: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 02/03/23 00:59:32.331 Feb 3 00:59:32.338: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-9642" to be "running" Feb 3 00:59:32.341: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890497ms Feb 3 00:59:34.349: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.011356309s Feb 3 00:59:34.349: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 00:59:34.358 STEP: Trying to apply a random taint on the found node. 02/03/23 00:59:34.363 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-00060afc-9e87-43fd-9023-5ac64c9f02f5=testing-taint-value:NoSchedule 02/03/23 00:59:34.377 STEP: Trying to apply a random label on the found node. 02/03/23 00:59:34.379 STEP: verifying the node has the label kubernetes.io/e2e-label-key-7e747659-a23c-45e8-ac08-d8baa2812c6a testing-label-value 02/03/23 00:59:34.386 STEP: Trying to relaunch the pod, now with tolerations. 02/03/23 00:59:34.389 Feb 3 00:59:34.392: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-pred-9642" to be "not pending" Feb 3 00:59:34.394: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133937ms Feb 3 00:59:36.398: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.005907706s Feb 3 00:59:36.398: INFO: Pod "with-tolerations" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-label-key-7e747659-a23c-45e8-ac08-d8baa2812c6a off the node v125-worker2 02/03/23 00:59:36.401 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-7e747659-a23c-45e8-ac08-d8baa2812c6a 02/03/23 00:59:36.414 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-00060afc-9e87-43fd-9023-5ac64c9f02f5=testing-taint-value:NoSchedule 02/03/23 00:59:36.431 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 00:59:36.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9642" for this suite. 02/03/23 00:59:36.437 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","completed":1,"skipped":852,"failed":0}
------------------------------
• [4.147 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582
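The passing spec above taints and labels one worker, then relaunches the pod with a matching toleration and a node selector pinning it to that worker. As a hedged sketch of that pod shape, built with the Kubernetes API types the e2e suite itself uses but with illustrative key names in place of the run's randomly generated ones:

```go
// Sketch only: a pod that tolerates an equality-matched NoSchedule taint and is
// pinned to the labelled node. Key names and the namespace are illustrative,
// not the random kubernetes.io/e2e-* keys from the run.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "with-tolerations",
			Namespace: "sched-pred-example", // hypothetical namespace
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "with-tolerations",
				Image: "k8s.gcr.io/pause:3.8",
			}},
			// Matches a node taint of the form <taint-key>=testing-taint-value:NoSchedule.
			Tolerations: []corev1.Toleration{{
				Key:      "example.com/e2e-taint-key", // illustrative key
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			// Stands in for the random label the test applies to the found node.
			NodeSelector: map[string]string{
				"example.com/e2e-label-key": "testing-label-value",
			},
		},
	}
	fmt.Printf("tolerations: %+v\n", pod.Spec.Tolerations)
}
```

Because key, value, and effect all match the node's taint, the scheduler admits the pod onto the tainted node, which is why "with-tolerations" reaches Running about two seconds after creation in the log.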
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:59:36.469 Feb 3 00:59:36.469: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 00:59:36.47 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:59:36.478 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:59:36.481 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 00:59:36.484: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 00:59:36.491: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 00:59:36.493: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 00:59:36.498: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.498: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:59:36.498: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.498: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:59:36.498: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.498: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:59:36.498: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 00:59:36.502: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.502: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:59:36.502: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.502: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:59:36.502: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.502: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:59:36.502: INFO: with-tolerations from sched-pred-9642 started at 2023-02-03 00:59:34 +0000 UTC (1 container statuses recorded) Feb 3 00:59:36.502: INFO: Container with-tolerations ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 02/03/23 00:59:36.502 Feb 3 00:59:36.507: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-4685" to be "running" Feb 3 00:59:36.509: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230098ms Feb 3 00:59:38.514: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.007099696s Feb 3 00:59:38.514: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 00:59:38.517 STEP: Trying to apply a random taint on the found node. 02/03/23 00:59:38.524 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-56dd0503-ac5c-4fe5-a1fa-045839962032=testing-taint-value:NoSchedule 02/03/23 00:59:38.538 STEP: Trying to apply a random label on the found node. 02/03/23 00:59:38.541 STEP: verifying the node has the label kubernetes.io/e2e-label-key-50bea4d6-dd80-4d3a-8545-ec6af76877f9 testing-label-value 02/03/23 00:59:38.553 STEP: Trying to relaunch the pod, still no tolerations. 
02/03/23 00:59:38.556 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab47a1dd16], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4685/without-toleration to v125-worker2] 02/03/23 00:59:38.559 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab6872b39b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 00:59:38.559 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab69284414], Reason = [Created], Message = [Created container without-toleration] 02/03/23 00:59:38.559 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab76494480], Reason = [Started], Message = [Started container without-toleration] 02/03/23 00:59:38.56 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17402aabc20958f3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-56dd0503-ac5c-4fe5-a1fa-045839962032: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 02/03/23 00:59:38.568 STEP: Removing taint off the node 02/03/23 00:59:39.569 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17402aabc20958f3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-56dd0503-ac5c-4fe5-a1fa-045839962032: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 
02/03/23 00:59:39.573 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab47a1dd16], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4685/without-toleration to v125-worker2] 02/03/23 00:59:39.573 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab6872b39b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 00:59:39.573 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab69284414], Reason = [Created], Message = [Created container without-toleration] 02/03/23 00:59:39.574 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aab76494480], Reason = [Started], Message = [Started container without-toleration] 02/03/23 00:59:39.574 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-56dd0503-ac5c-4fe5-a1fa-045839962032=testing-taint-value:NoSchedule 02/03/23 00:59:39.59 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17402aabff6aabbf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4685/still-no-tolerations to v125-worker2] 02/03/23 00:59:39.598 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17402aac218c55f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 00:59:40.171 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17402aac225d03ff], Reason = [Created], Message = [Created container still-no-tolerations] 02/03/23 00:59:40.184 STEP: Considering event: Type = [Normal], Name = [without-toleration.17402aac2880ff11], Reason = [Killing], Message = [Stopping container without-toleration] 02/03/23 00:59:40.288 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17402aac31a854fd], Reason = [Started], Message = [Started container still-no-tolerations] 02/03/23 00:59:40.441 STEP: removing the label kubernetes.io/e2e-label-key-50bea4d6-dd80-4d3a-8545-ec6af76877f9 off the node v125-worker2 02/03/23 00:59:40.597 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-50bea4d6-dd80-4d3a-8545-ec6af76877f9 02/03/23 00:59:40.609 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-56dd0503-ac5c-4fe5-a1fa-045839962032=testing-taint-value:NoSchedule 02/03/23 00:59:40.614 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 00:59:40.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4685" for this suite. 
02/03/23 00:59:40.621 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","completed":2,"skipped":1544,"failed":0}
------------------------------
• [4.156 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625
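In the non-matching spec above, the relaunched "still-no-tolerations" pod is rejected with the "untolerated taint" FailedScheduling event until the taint is removed. The node-side setup amounts to adding a NoSchedule taint to the node spec; a minimal, hedged client-go sketch with an illustrative key and a plain read-modify-update (not necessarily how the framework does it internally):

```go
// Sketch only: add a NoSchedule taint to a node, roughly equivalent in spirit to
//   kubectl taint nodes v125-worker2 example.com/e2e-taint-key=testing-taint-value:NoSchedule
// Error handling is minimal and the key name is illustrative.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "v125-worker2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Append the taint and write the node back.
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "example.com/e2e-taint-key", // illustrative
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	})
	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```

With the taint in place, a pod pinned to that node but carrying no matching toleration stays Pending, exactly as the scheduler events report: "1 node(s) had untolerated taint ... Preemption is not helpful for scheduling."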
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted test/e2e/scheduling/preemption.go:355 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:59:40.656 Feb 3 00:59:40.656: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/03/23 00:59:40.657 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:59:40.664 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:59:40.667 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 3 00:59:40.677: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:00:40.700: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322 STEP: Trying to get 2 available nodes which can run pod 02/03/23 01:00:40.703 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:00:40.703 Feb 3 01:00:40.713: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:40.717: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.287062ms Feb 3 01:00:42.721: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007445951s Feb 3 01:00:42.721: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:00:42.724 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:00:42.732 Feb 3 01:00:42.737: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:42.740: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829256ms Feb 3 01:00:44.745: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007897592s Feb 3 01:00:44.745: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:00:44.748 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 02/03/23 01:00:44.754 STEP: Apply 10 fake resource to node v125-worker2. 02/03/23 01:00:44.769 STEP: Apply 10 fake resource to node v125-worker.
02/03/23 01:00:44.795 [It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 02/03/23 01:00:44.805 Feb 3 01:00:44.809: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:44.812: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824625ms Feb 3 01:00:46.816: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006656837s Feb 3 01:00:48.818: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 4.008847393s Feb 3 01:00:48.818: INFO: Pod "high" satisfied condition "running" Feb 3 01:00:48.827: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:48.830: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798824ms Feb 3 01:00:50.834: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006725567s Feb 3 01:00:52.834: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.006811491s Feb 3 01:00:52.834: INFO: Pod "low-1" satisfied condition "running" Feb 3 01:00:52.843: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:52.846: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915425ms Feb 3 01:00:54.851: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.008132859s Feb 3 01:00:54.851: INFO: Pod "low-2" satisfied condition "running" Feb 3 01:00:54.859: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:54.863: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517614ms Feb 3 01:00:56.868: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 2.008281536s Feb 3 01:00:56.868: INFO: Pod "low-3" satisfied condition "running" STEP: Create 1 Medium Pod with TopologySpreadConstraints 02/03/23 01:00:56.871 Feb 3 01:00:56.876: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-5921" to be "running" Feb 3 01:00:56.880: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306991ms Feb 3 01:00:58.885: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008297799s Feb 3 01:01:00.885: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008952149s Feb 3 01:01:02.884: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 6.007455888s Feb 3 01:01:02.884: INFO: Pod "medium" satisfied condition "running" STEP: Verify there are 3 Pods left in this namespace 02/03/23 01:01:02.887 STEP: Pod "high" is as expected to be running. 02/03/23 01:01:02.89 STEP: Pod "low-1" is as expected to be running. 02/03/23 01:01:02.891 STEP: Pod "medium" is as expected to be running. 
02/03/23 01:01:02.891 [AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker2 02/03/23 01:01:02.891 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 02/03/23 01:01:02.904 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker 02/03/23 01:01:02.908 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 02/03/23 01:01:02.921 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5921" for this suite. 02/03/23 01:01:02.95 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","completed":3,"skipped":2273,"failed":0}
------------------------------
• [SLOW TEST] [82.328 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption test/e2e/scheduling/preemption.go:316 validates proper pods are preempted test/e2e/scheduling/preemption.go:355
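The preemption spec above fills both nodes' fake resource to 9/10 with one high-priority and three low-priority pods, then creates a "medium" pod whose topology spread constraint and resource request can only be satisfied by evicting a low-priority pod. A hedged sketch of roughly that pod shape; the priority class name and the extended resource name are stand-ins, not the test's actual fixtures:

```go
// Sketch only: a pod combining a DoNotSchedule topology spread constraint over
// the dedicated kubernetes.io/e2e-pts-preemption key with a priority class, so
// the scheduler may preempt a lower-priority pod to place it.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "medium"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "medium", Labels: labels},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed PriorityClass, created separately
			Containers: []corev1.Container{{
				Name:  "medium",
				Image: "k8s.gcr.io/pause:3.8",
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// Stand-in name for the fake extended resource patched onto both nodes.
						corev1.ResourceName("example.com/fake-pts-resource"): resource.MustParse("4"),
					},
				},
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
			}},
		},
	}
	fmt.Println(pod.Name, "spreads over", pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}
```

Roughly speaking, it is the resource request, not the spread constraint alone, that forces preemption here: both nodes are nearly full of fake resource, so placing "medium" means evicting one of the "low" pods while staying within MaxSkew=1, and the final pod count of 3 (high, low-1, medium) in the log reflects exactly that.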
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:02.984 Feb 3 01:01:02.984: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:02.985 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:02.993 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:02.995 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:02.998: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:03.003: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 01:01:03.006: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:03.011: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.011: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:03.011: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.011: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:03.011: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.011: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:03.011: INFO: low-1 from sched-preemption-5921 started at 2023-02-03 01:00:50 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.011: INFO: Container low-1 ready: true, restart count 0 Feb 3 01:01:03.011: INFO: medium from sched-preemption-5921 started at 2023-02-03 01:01:00 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.011: INFO: Container medium ready: true, restart count 0 Feb 3 01:01:03.011: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:03.016: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.016: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:03.016: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.016: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:03.016: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.016: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:03.016: INFO: high from sched-preemption-5921 started at 2023-02-03 01:00:46 +0000 UTC (1 container statuses recorded) Feb 3 01:01:03.016: INFO: Container high ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:726 STEP: Trying to get 2 available nodes which can run pod 02/03/23 01:01:03.016 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:01:03.016 Feb 3 01:01:03.021: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1434" to be "running" Feb 3 01:01:03.024: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273132ms Feb 3 01:01:05.029: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.00748616s Feb 3 01:01:05.029: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:01:05.032 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:01:05.04 Feb 3 01:01:05.045: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1434" to be "running" Feb 3 01:01:05.048: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.943228ms Feb 3 01:01:07.052: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007103909s Feb 3 01:01:07.052: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:01:07.055 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 
02/03/23 01:01:07.063 [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 [AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:737 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker2 02/03/23 01:01:09.101 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 02/03/23 01:01:09.115 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker 02/03/23 01:01:09.118 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 02/03/23 01:01:09.129 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:09.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1434" for this suite. 02/03/23 01:01:09.136 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","completed":4,"skipped":2275,"failed":0}
------------------------------
• [SLOW TEST] [6.157 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering test/e2e/scheduling/predicates.go:722 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743
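The filtering spec above labels the two workers with a dedicated topology key and then expects four replicas to land two-and-two. A hedged sketch of a workload with that shape (an illustrative ReplicaSet; the real spec's name, image, and any node affinity are the test's own):

```go
// Sketch only: four replicas whose pod template carries a DoNotSchedule spread
// constraint with MaxSkew=1 over the kubernetes.io/e2e-pts-filter key, so a
// two-node topology must end up with two pods per node.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "e2e-pts-filter"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-e2e-pts-filter"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(4),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-pts-filter",
						Image: "k8s.gcr.io/pause:3.8",
					}},
					TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
						MaxSkew:           1,
						TopologyKey:       "kubernetes.io/e2e-pts-filter",
						WhenUnsatisfiable: corev1.DoNotSchedule,
						LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
					}},
				},
			},
		},
	}
	fmt.Println(rs.Name, "replicas:", *rs.Spec.Replicas)
}
```

With MaxSkew=1 and DoNotSchedule, a 3/1 split would violate the constraint, so the only feasible outcome across two topology domains is 2/2, which is what the next spec's pod listing shows: two rs-e2e-pts-filter pods on each worker.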
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:660 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:09.19 Feb 3 01:01:09.190: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:09.191 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:09.199 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:09.201 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:09.204: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:09.211: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 01:01:09.215: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:09.221: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.221: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:09.221: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.221: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:09.221: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.221: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:09.221: INFO: rs-e2e-pts-filter-mktw6 from sched-pred-1434 started at 2023-02-03 01:01:07 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.221: INFO: Container e2e-pts-filter ready: true, restart count 0 Feb 3 01:01:09.221: INFO: rs-e2e-pts-filter-wwnfh from sched-pred-1434 started at 2023-02-03 01:01:07 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.221: INFO: Container e2e-pts-filter ready: true, restart count 0 Feb 3 01:01:09.221: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:09.226: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.226: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:09.226: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.226: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:09.226: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.226: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:09.226: INFO: rs-e2e-pts-filter-7jgtk from sched-pred-1434 started at 2023-02-03 01:01:07 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.226: INFO: Container e2e-pts-filter ready: true, restart count 0 Feb 3 01:01:09.226: INFO: rs-e2e-pts-filter-jdqj9 from sched-pred-1434 started at 2023-02-03 01:01:07 +0000 UTC (1 container statuses recorded) Feb 3 01:01:09.226: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:660 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:01:09.226 Feb 3 01:01:09.233: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-75" to be "running" Feb 3 01:01:09.235: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.54637ms Feb 3 01:01:11.240: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006998693s Feb 3 01:01:11.240: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:01:11.243 STEP: Trying to apply a random label on the found node. 
02/03/23 01:01:11.252 STEP: verifying the node has the label kubernetes.io/e2e-63e25c7b-64e3-4948-8a7b-d65466eecf00 90 02/03/23 01:01:11.263 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled 02/03/23 01:01:11.266 Feb 3 01:01:11.270: INFO: Waiting up to 5m0s for pod "pod1" in namespace "sched-pred-75" to be "not pending" Feb 3 01:01:11.274: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.193807ms Feb 3 01:01:13.277: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.006293081s Feb 3 01:01:13.277: INFO: Pod "pod1" satisfied condition "not pending" STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.20.0.10 on the node which pod1 resides and expect scheduled 02/03/23 01:01:13.277 Feb 3 01:01:13.281: INFO: Waiting up to 5m0s for pod "pod2" in namespace "sched-pred-75" to be "not pending" Feb 3 01:01:13.284: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.969752ms Feb 3 01:01:15.289: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007756678s Feb 3 01:01:15.289: INFO: Pod "pod2" satisfied condition "not pending" STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.20.0.10 but use UDP protocol on the node which pod2 resides 02/03/23 01:01:15.289 Feb 3 01:01:15.296: INFO: Waiting up to 5m0s for pod "pod3" in namespace "sched-pred-75" to be "not pending" Feb 3 01:01:15.299: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113614ms Feb 3 01:01:17.304: INFO: Pod "pod3": Phase="Running", Reason="", readiness=false. Elapsed: 2.007362542s Feb 3 01:01:17.304: INFO: Pod "pod3" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-63e25c7b-64e3-4948-8a7b-d65466eecf00 off the node v125-worker 02/03/23 01:01:17.304 STEP: verifying the node doesn't have the label kubernetes.io/e2e-63e25c7b-64e3-4948-8a7b-d65466eecf00 02/03/23 01:01:17.317 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:17.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-75" for this suite. 
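pod1, pod2 and pod3 above all request host port 54321 on the same node, yet all three schedule: a host port is treated as occupied per (hostIP, hostPort, protocol) tuple, and the three pods use 127.0.0.1/TCP, 172.20.0.10/TCP and 172.20.0.10/UDP respectively. A minimal Go sketch of such a container-port stanza, assuming the k8s.io/api types (the hostPortPod helper, container port 8080 and the agnhost image tag are illustrative, not taken from the test source):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod that exposes container port 8080 on the given host
// IP, host port 54321 and protocol. Two pods only conflict when all three of
// (hostIP, hostPort, protocol) are identical.
func hostPortPod(name, hostIP string, proto v1.Protocol) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // illustrative tag
				Ports: []v1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	_ = hostPortPod("pod1", "127.0.0.1", v1.ProtocolTCP)
	_ = hostPortPod("pod2", "172.20.0.10", v1.ProtocolTCP)
	_ = hostPortPod("pod3", "172.20.0.10", v1.ProtocolUDP)
}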
02/03/23 01:01:17.325 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","completed":5,"skipped":3089,"failed":0} ------------------------------ • [SLOW TEST] [8.140 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:660
------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:17.388 Feb 3 01:01:17.388: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:17.39 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:17.402 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:17.406 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:17.409: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:17.416: INFO: Waiting for terminating namespaces to be deleted...
Feb 3 01:01:17.419: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:17.424: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:17.424: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:17.424: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:17.424: INFO: pod1 from sched-pred-75 started at 2023-02-03 01:01:11 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container agnhost ready: true, restart count 0 Feb 3 01:01:17.424: INFO: pod2 from sched-pred-75 started at 2023-02-03 01:01:13 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container agnhost ready: true, restart count 0 Feb 3 01:01:17.424: INFO: pod3 from sched-pred-75 started at 2023-02-03 01:01:15 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.424: INFO: Container agnhost ready: false, restart count 0 Feb 3 01:01:17.424: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:17.429: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.429: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:17.429: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.429: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:17.429: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:17.429: INFO: Container kube-proxy ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 Feb 3 01:01:17.442: INFO: Pod create-loop-devs-d5nrm requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.442: INFO: Pod create-loop-devs-tlwgp requesting local ephemeral resource =0 on Node v125-worker2 Feb 3 01:01:17.442: INFO: Pod kindnet-h8fbr requesting local ephemeral resource =0 on Node v125-worker2 Feb 3 01:01:17.442: INFO: Pod kindnet-xhfn8 requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.442: INFO: Pod kube-proxy-bvl9x requesting local ephemeral resource =0 on Node v125-worker2 Feb 3 01:01:17.442: INFO: Pod kube-proxy-pxrcg requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.443: INFO: Pod pod1 requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.443: INFO: Pod pod2 requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.443: INFO: Pod pod3 requesting local ephemeral resource =0 on Node v125-worker Feb 3 01:01:17.443: INFO: Using pod capacity: 47055905587 Feb 3 01:01:17.443: INFO: Node: v125-worker has local ephemeral resource allocatable: 470559055872 Feb 3 01:01:17.443: INFO: Node: v125-worker2 has local ephemeral resource allocatable: 470559055872 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 02/03/23 01:01:17.443 Feb 3 01:01:17.531: INFO: 
Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.17402ac2c84f15bd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-0 to v125-worker2] 02/03/23 01:01:27.589 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17402ac2fbd6092f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.589 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17402ac2fc91c8a6], Reason = [Created], Message = [Created container overcommit-0] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17402ac30c4bdbb5], Reason = [Started], Message = [Started container overcommit-0] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17402ac2c887f12f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-1 to v125-worker] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17402ac32e7a3c41], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17402ac32f7dd34e], Reason = [Created], Message = [Created container overcommit-1] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17402ac337e8f5bd], Reason = [Started], Message = [Started container overcommit-1] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-10.17402ac2cae76373], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-10 to v125-worker2] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-10.17402ac32296d80f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-10.17402ac3235aa3b0], Reason = [Created], Message = [Created container overcommit-10] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-10.17402ac32fd35dcd], Reason = [Started], Message = [Started container overcommit-10] 02/03/23 01:01:27.59 STEP: Considering event: Type = [Normal], Name = [overcommit-11.17402ac2cb345c75], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-11 to v125-worker] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-11.17402ac344cbee08], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-11.17402ac34577cefa], Reason = [Created], Message = [Created container overcommit-11] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-11.17402ac34ee82dcd], Reason = [Started], Message = [Started container overcommit-11] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-12.17402ac2cb593bf6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-12 to v125-worker] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-12.17402ac2fedc77c0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-12.17402ac2ff9bb0f0], Reason = [Created], Message = [Created container overcommit-12] 02/03/23 
01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-12.17402ac30b48357f], Reason = [Started], Message = [Started container overcommit-12] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-13.17402ac2cba0fe4a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-13 to v125-worker2] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-13.17402ac2f284d713], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-13.17402ac2f33660a9], Reason = [Created], Message = [Created container overcommit-13] 02/03/23 01:01:27.591 STEP: Considering event: Type = [Normal], Name = [overcommit-13.17402ac3008de6ce], Reason = [Started], Message = [Started container overcommit-13] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-14.17402ac2cbd6908a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-14 to v125-worker] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-14.17402ac3242fceb4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-14.17402ac324e661ee], Reason = [Created], Message = [Created container overcommit-14] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-14.17402ac32f820ca2], Reason = [Started], Message = [Started container overcommit-14] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-15.17402ac2cc1201fe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-15 to v125-worker2] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-15.17402ac35c5dc8f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-15.17402ac35d121be0], Reason = [Created], Message = [Created container overcommit-15] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-15.17402ac36a2225e7], Reason = [Started], Message = [Started container overcommit-15] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-16.17402ac2cc54c562], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-16 to v125-worker2] 02/03/23 01:01:27.592 STEP: Considering event: Type = [Normal], Name = [overcommit-16.17402ac36dc9d669], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-16.17402ac36e8a7781], Reason = [Created], Message = [Created container overcommit-16] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-16.17402ac37b76bbd1], Reason = [Started], Message = [Started container overcommit-16] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-17.17402ac2cc9041f5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-17 to v125-worker] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-17.17402ac36c3ef287], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 
02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-17.17402ac36d009cd8], Reason = [Created], Message = [Created container overcommit-17] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-17.17402ac37b72f564], Reason = [Started], Message = [Started container overcommit-17] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-18.17402ac2ccce81a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-18 to v125-worker] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-18.17402ac35c901382], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-18.17402ac35d41e325], Reason = [Created], Message = [Created container overcommit-18] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-18.17402ac3691fb5f9], Reason = [Started], Message = [Started container overcommit-18] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-19.17402ac2cd12a1ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-19 to v125-worker] 02/03/23 01:01:27.593 STEP: Considering event: Type = [Normal], Name = [overcommit-19.17402ac35c46a6e5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-19.17402ac35d02b337], Reason = [Created], Message = [Created container overcommit-19] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-19.17402ac36958ab4c], Reason = [Started], Message = [Started container overcommit-19] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17402ac2c8bafe89], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-2 to v125-worker2] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17402ac32e757903], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17402ac32f7dfa5e], Reason = [Created], Message = [Created container overcommit-2] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17402ac337801191], Reason = [Started], Message = [Started container overcommit-2] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17402ac2c8ff8233], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-3 to v125-worker] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17402ac30f54e27f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17402ac30ffdd085], Reason = [Created], Message = [Created container overcommit-3] 02/03/23 01:01:27.594 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17402ac31cbe0b01], Reason = [Started], Message = [Started container overcommit-3] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17402ac2c9509b5d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-4 to v125-worker2] 02/03/23 01:01:27.595 STEP: Considering 
event: Type = [Normal], Name = [overcommit-4.17402ac3446b605b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17402ac34515e395], Reason = [Created], Message = [Created container overcommit-4] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17402ac34fa48bc1], Reason = [Started], Message = [Started container overcommit-4] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17402ac2c99ca53d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-5 to v125-worker2] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17402ac3241d51b2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17402ac324dc71c9], Reason = [Created], Message = [Created container overcommit-5] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17402ac32ffc3f6b], Reason = [Started], Message = [Started container overcommit-5] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17402ac2c9dee230], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-6 to v125-worker2] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17402ac310862ef6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.595 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17402ac3111d16af], Reason = [Created], Message = [Created container overcommit-6] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17402ac31fea7828], Reason = [Started], Message = [Started container overcommit-6] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17402ac2ca1c8406], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-7 to v125-worker2] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17402ac332212a35], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17402ac332b751b6], Reason = [Created], Message = [Created container overcommit-7] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17402ac33dc81fec], Reason = [Started], Message = [Started container overcommit-7] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17402ac2ca61d431], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-8 to v125-worker] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17402ac3100c0a1f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17402ac310aef3d9], Reason = [Created], Message = [Created container overcommit-8] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17402ac31d26d284], Reason = [Started], Message = [Started container overcommit-8] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = 
[overcommit-9.17402ac2caa5abca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2890/overcommit-9 to v125-worker] 02/03/23 01:01:27.596 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17402ac322521c64], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/03/23 01:01:27.597 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17402ac322e96a9d], Reason = [Created], Message = [Created container overcommit-9] 02/03/23 01:01:27.597 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17402ac32f7a46cd], Reason = [Started], Message = [Started container overcommit-9] 02/03/23 01:01:27.597 STEP: Considering event: Type = [Warning], Name = [additional-pod.17402ac524c87bd7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient ephemeral-storage. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.] 02/03/23 01:01:27.598 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:28.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2890" for this suite. 02/03/23 01:01:28.612 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","completed":6,"skipped":3908,"failed":0} ------------------------------ • [SLOW TEST] [11.228 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122
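The arithmetic behind this spec: each worker advertises 470559055872 bytes of allocatable ephemeral storage, the test sizes each pod at 47055905587 bytes (about one tenth of a node), so the 20 pods saturate the two workers and the extra additional-pod is rejected with Insufficient ephemeral-storage; the third node in "0/3 nodes are available" is the control-plane node, excluded by its node-role.kubernetes.io/control-plane taint. A minimal Go sketch of a pod that requests a fixed slice of local ephemeral storage, assuming the k8s.io/api and apimachinery types (the ephemeralPod helper is an illustrative name, not the test's own code):

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ephemeralPod requests (and limits itself to) a fixed amount of local
// ephemeral storage, so the scheduler accounts for it like CPU or memory.
func ephemeralPod(name string, bytes int64) *v1.Pod {
	qty := *resource.NewQuantity(bytes, resource.BinarySI)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.8",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: qty},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: qty},
				},
			}},
		},
	}
}

func main() {
	// 47055905587 bytes matches the "Using pod capacity" line in the log above.
	_ = ephemeralPod("overcommit-0", 47055905587)
}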
------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:81 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:28.633 Feb 3 01:01:28.633: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 02/03/23 01:01:28.636 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:28.647 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:28.65 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. Schedulable zone count = 0 02/03/23 01:01:28.658 Feb 3 01:01:28.658: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:187 Feb 3 01:01:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-9486" for this suite.
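Both Multi-AZ specs in this run are skipped because none of the schedulable nodes carries a zone label, so the framework counts zero zones. A rough client-go sketch of that kind of check (the schedulableZones helper is an illustrative name, and this simplified version ignores taints and the Unschedulable flag, unlike the real framework; topology.kubernetes.io/zone and the legacy failure-domain.beta.kubernetes.io/zone are the standard label keys):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// schedulableZones returns the set of zone names advertised by the cluster's
// nodes; an empty result is why the Multi-AZ specs above are skipped.
func schedulableZones(cs kubernetes.Interface) (map[string]bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	zones := map[string]bool{}
	for _, n := range nodes.Items {
		if z := n.Labels["topology.kubernetes.io/zone"]; z != "" {
			zones[z] = true
		} else if z := n.Labels["failure-domain.beta.kubernetes.io/zone"]; z != "" {
			zones[z] = true
		}
	}
	return zones, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	zones, err := schedulableZones(kubernetes.NewForConfigOrDie(cfg))
	if err != nil {
		panic(err)
	}
	fmt.Println("schedulable zone count =", len(zones))
}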
02/03/23 01:01:28.662 [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 ------------------------------ S [SKIPPED] [0.033 seconds] [sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51 should spread the pods of a replication controller across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:81 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:28.633 Feb 3 01:01:28.633: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 02/03/23 01:01:28.636 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:28.647 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:28.65 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. Schedulable zone count = 0 02/03/23 01:01:28.658 Feb 3 01:01:28.658: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:187 Feb 3 01:01:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-9486" for this suite. 02/03/23 01:01:28.662 [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 << End Captured GinkgoWriter Output Schedulable zone count is 0, only run for multi-zone clusters, skipping test In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:28.678 Feb 3 01:01:28.678: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:28.679 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:28.689 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:28.693 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:28.696: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:28.704: INFO: Waiting for terminating namespaces to be deleted... 
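
(For context on the spec running here: "validates that NodeAffinity is respected if not matching" schedules a pod whose node selector matches no node and then waits for the scheduler's FailedScheduling event, which is captured for "restricted-pod" a little further down. The client-go sketch below is a minimal illustration of that setup only; it is not the e2e framework's own code, and the kubeconfig path, namespace, and label key/value are assumptions made for the example.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this (made-up) label, so the scheduler reports
			// "node(s) didn't match Pod's node affinity/selector" and leaves
			// the pod Pending, which is what the spec asserts on.
			NodeSelector: map[string]string{"example/nonexistent-label": "value"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.8",
			}},
		},
	}

	_, err = client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	fmt.Println("create pod:", err)
}

The resulting FailedScheduling warning for restricted-pod appears below in the captured events.
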
Feb 3 01:01:28.707: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:28.715: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:28.715: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:28.715: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-1 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-1 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-11 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-11 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-12 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-12 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-14 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-14 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-17 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-17 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-18 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-18 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-19 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-19 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-3 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-3 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-8 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-8 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-9 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-9 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: pod2 from sched-pred-75 started at 2023-02-03 01:01:13 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container agnhost ready: true, restart count 0 Feb 3 01:01:28.715: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:28.723: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:28.723: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container kindnet-cni ready: true, restart count 
0 Feb 3 01:01:28.723: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-0 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-0 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-10 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-10 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-13 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-13 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-15 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-15 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-16 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-16 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-2 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-2 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-4 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-4 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-5 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-5 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-6 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-6 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-7 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-7 ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 STEP: Trying to schedule Pod with nonempty NodeSelector. 02/03/23 01:01:28.723 STEP: Considering event: Type = [Warning], Name = [restricted-pod.17402ac6d0e99169], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 02/03/23 01:01:34.781 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:35.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4426" for this suite. 
02/03/23 01:01:35.786 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","completed":7,"skipped":4274,"failed":0} ------------------------------ • [SLOW TEST] [7.113 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:28.678 Feb 3 01:01:28.678: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:28.679 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:28.689 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:28.693 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:28.696: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:28.704: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:01:28.707: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:28.715: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:28.715: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:28.715: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-1 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-1 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-11 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-11 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-12 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-12 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-14 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-14 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-17 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-17 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-18 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-18 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-19 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-19 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-3 from 
sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-3 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-8 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-8 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: overcommit-9 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container overcommit-9 ready: true, restart count 0 Feb 3 01:01:28.715: INFO: pod2 from sched-pred-75 started at 2023-02-03 01:01:13 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.715: INFO: Container agnhost ready: true, restart count 0 Feb 3 01:01:28.715: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:28.723: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:28.723: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:28.723: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-0 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-0 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-10 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-10 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-13 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-13 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-15 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-15 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-16 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-16 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-2 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-2 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-4 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-4 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-5 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-5 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-6 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:28.723: INFO: Container overcommit-6 ready: true, restart count 0 Feb 3 01:01:28.723: INFO: overcommit-7 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 
01:01:28.723: INFO: Container overcommit-7 ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 STEP: Trying to schedule Pod with nonempty NodeSelector. 02/03/23 01:01:28.723 STEP: Considering event: Type = [Warning], Name = [restricted-pod.17402ac6d0e99169], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 02/03/23 01:01:34.781 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:01:35.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4426" for this suite. 02/03/23 01:01:35.786 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for test/e2e/scheduling/predicates.go:271 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:35.803 Feb 3 01:01:35.803: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:35.804 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:35.815 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:35.819 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:35.823: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:35.831: INFO: Waiting for terminating namespaces to be deleted... 
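
(The spec that fails further down, "validates pod overhead is considered along with resource limits of pods that are allowed to run", registers a RuntimeClass named "test-handler" whose Overhead is added to a pod's requests during scheduling; the reported error is a 409 AlreadyExists because a RuntimeClass with that name was left behind by an earlier run. The client-go sketch below is a minimal illustration of such a registration that tolerates the 409; it is not the e2e test's own code, and the handler value, overhead quantities, and kubeconfig path are assumptions made for the example.)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is an assumption for this sketch).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "runc", // illustrative; the real handler depends on the CRI configuration
		// The scheduler adds PodFixed to the requests of every pod that sets
		// runtimeClassName: test-handler when deciding whether the pod fits on a node.
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
	}

	_, err = client.NodeV1().RuntimeClasses().Create(context.TODO(), rc, metav1.CreateOptions{})
	switch {
	case apierrors.IsAlreadyExists(err):
		// A leftover RuntimeClass from a previous run produces exactly the
		// 409 seen in the failed spec; deleting or reusing it avoids the error.
		fmt.Println("RuntimeClass test-handler already exists; reusing it")
	case err != nil:
		panic(err)
	default:
		fmt.Println("RuntimeClass test-handler created")
	}
}
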
Feb 3 01:01:35.834: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:35.841: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:35.841: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:35.841: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-1 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-1 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-11 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-11 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-12 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-12 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-17 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-17 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-19 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-19 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-3 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-3 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-9 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-9 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:35.849: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:35.849: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:35.849: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-0 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-0 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-10 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-10 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-15 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-15 ready: true, 
restart count 0 Feb 3 01:01:35.849: INFO: overcommit-16 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-16 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-4 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-4 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-5 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-5 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-6 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-6 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-7 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-7 ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:216 STEP: Add RuntimeClass and fake resource 02/03/23 01:01:41.884 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:01:41.884 Feb 3 01:01:41.892: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6954" to be "running" Feb 3 01:01:41.896: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440202ms Feb 3 01:01:43.900: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007736766s Feb 3 01:01:43.900: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:01:43.903 Feb 3 01:01:43.928: INFO: Unexpected error: failed to create RuntimeClass resource: <*errors.StatusError | 0xc0037de5a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "runtimeclasses.node.k8s.io \"test-handler\" already exists", Reason: "AlreadyExists", Details: { Name: "test-handler", Group: "node.k8s.io", Kind: "runtimeclasses", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 409, }, } Feb 3 01:01:43.928: FAIL: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.4.1() test/e2e/scheduling/predicates.go:248 +0x745 [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass 02/03/23 01:01:43.929 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 STEP: Collecting events from namespace "sched-pred-6954". 02/03/23 01:01:43.943 STEP: Found 4 events. 
02/03/23 01:01:43.946 Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:41 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-6954/without-label to v125-worker2 Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Pulled: Container image "k8s.gcr.io/pause:3.8" already present on machine Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Created: Created container without-label Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Started: Started container without-label Feb 3 01:01:43.949: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 01:01:43.949: INFO: Feb 3 01:01:43.953: INFO: Logging node info for node v125-control-plane Feb 3 01:01:43.957: INFO: Node Info: &Node{ObjectMeta:{v125-control-plane 9017473c-7a26-48da-8393-895a56c2149c 1460275 0 2023-01-23 08:54:35 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-23 08:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-23 08:54:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-02-03 01:01:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.5,},NodeAddress{Type:Hostname,Address:v125-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7a268254b9494affa6aa2fe9f6d1ec0a,SystemUUID:1fdc787b-27ad-4baf-8532-4f2b5ee09697,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:43.957: INFO: Logging kubelet events for node v125-control-plane Feb 3 01:01:43.960: INFO: Logging pods the kubelet thinks is on node v125-control-plane Feb 3 01:01:43.988: INFO: create-loop-devs-v247k started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:43.988: INFO: 
kube-controller-manager-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-controller-manager ready: true, restart count 0 Feb 3 01:01:43.988: INFO: local-path-provisioner-684f458cdd-2pgzv started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container local-path-provisioner ready: true, restart count 0 Feb 3 01:01:43.988: INFO: coredns-565d847f94-l677v started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container coredns ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kindnet-xb2v4 started at 2023-01-23 08:54:51 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-proxy-klz6s started at 2023-01-23 08:54:51 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:43.988: INFO: coredns-565d847f94-bdmgp started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container coredns ready: true, restart count 0 Feb 3 01:01:43.988: INFO: etcd-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container etcd ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-apiserver-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-apiserver ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-scheduler-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-scheduler ready: true, restart count 0 Feb 3 01:01:44.073: INFO: Latency metrics for node v125-control-plane Feb 3 01:01:44.073: INFO: Logging node info for node v125-worker Feb 3 01:01:44.077: INFO: Node Info: &Node{ObjectMeta:{v125-worker 9100d805-c226-4d6e-86b4-1770ab4ad4f7 1460279 0 2023-01-23 08:54:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-worker kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:55:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2023-02-03 01:00:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet Update v1 2023-02-03 01:01:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:55:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.10,},NodeAddress{Type:Hostname,Address:v125-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04ca4fce11244dcf969a5942667ed21f,SystemUUID:8f0c0bed-0920-4a54-863b-d463c0d0f9e6,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:7ed3bfb1429e97f721cbd8b2953ffb1f0186e89c1c99ee0e919d563b0caa81d2 k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.3],SizeBytes:151196506,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/build-image/distroless-iptables@sha256:38e6b091d238094f081efad3e2b362e6480b2156f5f4fba6ea46835ecdcd47e2 k8s.gcr.io/build-image/distroless-iptables:v0.1.1],SizeBytes:7634231,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f 
k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:44.077: INFO: Logging kubelet events for node v125-worker Feb 3 01:01:44.080: INFO: Logging pods the kubelet thinks is on node v125-worker Feb 3 01:01:44.102: INFO: kube-proxy-pxrcg started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.102: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:44.103: INFO: kindnet-xhfn8 started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.103: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:44.103: INFO: create-loop-devs-d5nrm started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.103: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:44.327: INFO: Latency metrics for node v125-worker Feb 3 01:01:44.327: INFO: Logging node info for node v125-worker2 Feb 3 01:01:44.331: INFO: Node Info: &Node{ObjectMeta:{v125-worker2 3946dcd4-bff4-4f20-b391-423e1b2e731a 1460729 0 2023-01-23 08:54:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:55:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 
2023-02-03 01:01:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-02-03 01:01:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:55:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:v125-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c2e62a0f27f44f63af0b5bcd0c4b46ec,SystemUUID:7633b4e5-cb2e-46ae-a67f-c14e00012322,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c 
k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:44.332: INFO: Logging kubelet events for node v125-worker2 Feb 3 01:01:44.335: INFO: Logging pods the kubelet thinks is on node v125-worker2 Feb 3 01:01:44.358: INFO: kindnet-h8fbr started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:44.359: INFO: kube-proxy-bvl9x started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:44.359: INFO: create-loop-devs-tlwgp started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:44.562: INFO: Latency metrics for node v125-worker2 Feb 3 01:01:44.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6954" for this suite. 
02/03/23 01:01:44.566 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","completed":7,"skipped":4425,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [FAILED] [8.768 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach] test/e2e/scheduling/predicates.go:216 verify pod overhead is accounted for test/e2e/scheduling/predicates.go:271 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:35.803 Feb 3 01:01:35.803: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:01:35.804 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:35.815 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:35.819 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:01:35.823: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:01:35.831: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:01:35.834: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:01:35.841: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:35.841: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:35.841: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-1 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-1 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-11 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-11 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-12 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-12 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-17 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-17 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-19 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-19 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-3 from sched-pred-2890 started at 
2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-3 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: overcommit-9 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.841: INFO: Container overcommit-9 ready: true, restart count 0 Feb 3 01:01:35.841: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:01:35.849: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:35.849: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:35.849: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-0 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-0 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-10 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-10 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-15 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-15 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-16 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-16 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-4 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-4 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-5 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-5 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-6 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-6 ready: true, restart count 0 Feb 3 01:01:35.849: INFO: overcommit-7 from sched-pred-2890 started at 2023-02-03 01:01:17 +0000 UTC (1 container statuses recorded) Feb 3 01:01:35.849: INFO: Container overcommit-7 ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:216 STEP: Add RuntimeClass and fake resource 02/03/23 01:01:41.884 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:01:41.884 Feb 3 01:01:41.892: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6954" to be "running" Feb 3 01:01:41.896: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440202ms Feb 3 01:01:43.900: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007736766s Feb 3 01:01:43.900: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:01:43.903 Feb 3 01:01:43.928: INFO: Unexpected error: failed to create RuntimeClass resource: <*errors.StatusError | 0xc0037de5a0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "runtimeclasses.node.k8s.io \"test-handler\" already exists", Reason: "AlreadyExists", Details: { Name: "test-handler", Group: "node.k8s.io", Kind: "runtimeclasses", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 409, }, } Feb 3 01:01:43.928: FAIL: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.4.1() test/e2e/scheduling/predicates.go:248 +0x745 [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass 02/03/23 01:01:43.929 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 STEP: Collecting events from namespace "sched-pred-6954". 02/03/23 01:01:43.943 STEP: Found 4 events. 02/03/23 01:01:43.946 Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:41 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-6954/without-label to v125-worker2 Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Pulled: Container image "k8s.gcr.io/pause:3.8" already present on machine Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Created: Created container without-label Feb 3 01:01:43.946: INFO: At 2023-02-03 01:01:42 +0000 UTC - event for without-label: {kubelet v125-worker2} Started: Started container without-label Feb 3 01:01:43.949: INFO: POD NODE PHASE GRACE CONDITIONS Feb 3 01:01:43.949: INFO: Feb 3 01:01:43.953: INFO: Logging node info for node v125-control-plane Feb 3 01:01:43.957: INFO: Node Info: &Node{ObjectMeta:{v125-control-plane 9017473c-7a26-48da-8393-895a56c2149c 1460275 0 2023-01-23 08:54:35 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-23 08:54:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-01-23 08:54:59 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-02-03 01:01:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:16 +0000 UTC,LastTransitionTime:2023-01-23 08:54:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.5,},NodeAddress{Type:Hostname,Address:v125-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7a268254b9494affa6aa2fe9f6d1ec0a,SystemUUID:1fdc787b-27ad-4baf-8532-4f2b5ee09697,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 
registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:43.957: INFO: Logging kubelet events for node v125-control-plane Feb 3 01:01:43.960: INFO: Logging pods the kubelet thinks is on node v125-control-plane Feb 3 01:01:43.988: INFO: create-loop-devs-v247k started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-controller-manager-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-controller-manager ready: true, restart count 0 Feb 3 01:01:43.988: INFO: local-path-provisioner-684f458cdd-2pgzv started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container local-path-provisioner ready: true, restart count 0 Feb 3 01:01:43.988: INFO: coredns-565d847f94-l677v started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container coredns ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kindnet-xb2v4 started at 2023-01-23 08:54:51 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-proxy-klz6s started at 2023-01-23 08:54:51 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:43.988: INFO: coredns-565d847f94-bdmgp started at 2023-01-23 08:54:59 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container coredns ready: true, restart count 0 Feb 3 01:01:43.988: INFO: etcd-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container etcd ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-apiserver-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-apiserver ready: true, restart count 0 Feb 3 01:01:43.988: INFO: kube-scheduler-v125-control-plane started at 2023-01-23 08:54:39 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:43.988: INFO: Container kube-scheduler ready: true, restart count 0 Feb 3 01:01:44.073: INFO: Latency metrics for node v125-control-plane Feb 3 01:01:44.073: INFO: Logging node info for node v125-worker Feb 3 01:01:44.077: INFO: Node Info: &Node{ObjectMeta:{v125-worker 9100d805-c226-4d6e-86b4-1770ab4ad4f7 
1460279 0 2023-01-23 08:54:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-worker kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:55:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2023-02-03 01:00:44 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}} status} {kubelet Update v1 2023-02-03 01:01:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:09 +0000 UTC,LastTransitionTime:2023-01-23 08:55:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting 
ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.10,},NodeAddress{Type:Hostname,Address:v125-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:04ca4fce11244dcf969a5942667ed21f,SystemUUID:8f0c0bed-0920-4a54-863b-d463c0d0f9e6,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:7ed3bfb1429e97f721cbd8b2953ffb1f0186e89c1c99ee0e919d563b0caa81d2 k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.3],SizeBytes:151196506,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a 
k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/build-image/distroless-iptables@sha256:38e6b091d238094f081efad3e2b362e6480b2156f5f4fba6ea46835ecdcd47e2 k8s.gcr.io/build-image/distroless-iptables:v0.1.1],SizeBytes:7634231,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:44.077: INFO: Logging kubelet events for node v125-worker Feb 3 01:01:44.080: INFO: Logging pods the kubelet thinks is on node v125-worker Feb 3 01:01:44.102: INFO: kube-proxy-pxrcg started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.102: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:44.103: INFO: kindnet-xhfn8 started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.103: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:44.103: INFO: create-loop-devs-d5nrm started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.103: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:44.327: INFO: Latency metrics for node v125-worker Feb 3 01:01:44.327: INFO: Logging node info for node v125-worker2 Feb 3 01:01:44.331: INFO: Node Info: &Node{ObjectMeta:{v125-worker2 3946dcd4-bff4-4f20-b391-423e1b2e731a 
1460729 0 2023-01-23 08:54:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-23 08:54:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-01-23 08:55:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-02-03 01:01:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-02-03 01:01:43 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:54:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-02-03 01:01:05 +0000 UTC,LastTransitionTime:2023-01-23 08:55:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.13,},NodeAddress{Type:Hostname,Address:v125-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c2e62a0f27f44f63af0b5bcd0c4b46ec,SystemUUID:7633b4e5-cb2e-46ae-a67f-c14e00012322,BootID:6b29b1db-499d-4992-b2d7-2091a7eaafed,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa 
k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Feb 3 01:01:44.332: INFO: Logging kubelet events for node v125-worker2 Feb 3 01:01:44.335: INFO: Logging pods the kubelet thinks is on node v125-worker2 Feb 3 01:01:44.358: INFO: kindnet-h8fbr started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:01:44.359: INFO: kube-proxy-bvl9x started at 2023-01-23 08:54:56 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:01:44.359: INFO: 
create-loop-devs-tlwgp started at 2023-01-23 08:55:01 +0000 UTC (0+1 container statuses recorded) Feb 3 01:01:44.359: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:01:44.562: INFO: Latency metrics for node v125-worker2 Feb 3 01:01:44.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6954" for this suite. 02/03/23 01:01:44.566 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output Feb 3 01:01:43.928: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists In [BeforeEach] at: test/e2e/scheduling/predicates.go:248 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:44.584 Feb 3 01:01:44.584: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:01:44.586 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:44.596 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:44.6 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:01:44.604: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:02:44.628: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:02:44.631: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:02:44.645: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:02:44.645: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
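The one failure recorded above comes down to an HTTP 409 from the API server: the spec's BeforeEach tries to create the cluster-scoped RuntimeClass "test-handler", but an object of that name is still present, most likely left behind by an earlier run whose cleanup never completed. Below is a minimal client-go sketch of a tolerant variant of that step; the kubeconfig path is the one used by this run, while the Handler value and the overhead quantities are illustrative assumptions, not the framework's actual settings.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureRuntimeClass creates the RuntimeClass if it is missing and tolerates
// an existing object instead of failing with 409 AlreadyExists.
func ensureRuntimeClass(ctx context.Context, cs kubernetes.Interface) error {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		// Handler value is illustrative; the real spec wires up its own handler.
		Handler: "test-handler",
		// Overhead is what the failed spec exercises: the scheduler must add
		// these amounts on top of the pod's own requests. Values are made up.
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("10Mi"),
			},
		},
	}
	_, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		fmt.Println("RuntimeClass test-handler already present, reusing it")
		return nil
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := ensureRuntimeClass(context.Background(), cs); err != nil {
		panic(err)
	}
}

Deleting the stale object by hand (kubectl delete runtimeclass test-handler) before re-running the suite should clear the conflict as well.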
Feb 3 01:02:44.652: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:44.652: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:44.652: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:44.652: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:44.652: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:44.652: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:44.652: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:44.652: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 STEP: Trying to launch a pod with a label to get a node which can launch it. 02/03/23 01:02:44.652 Feb 3 01:02:44.661: INFO: Waiting up to 1m0s for pod "pod-with-label-security-s1" in namespace "sched-priority-5224" to be "running" Feb 3 01:02:44.664: INFO: Pod "pod-with-label-security-s1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.011723ms Feb 3 01:02:46.669: INFO: Pod "pod-with-label-security-s1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007985792s Feb 3 01:02:46.669: INFO: Pod "pod-with-label-security-s1" satisfied condition "running" STEP: Verifying the node has a label kubernetes.io/hostname 02/03/23 01:02:46.672 Feb 3 01:02:46.683: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:46.683: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:46.683: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:46.683: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:46.683: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:46.683: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:46.683: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:46.683: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:46.688: INFO: Waiting for running... Feb 3 01:02:46.690: INFO: Waiting for running... 
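For reference, ComputeCPUMemFraction is a plain ratio of total requested resources to node allocatable: 200 milli-CPU against 88000 allocatable gives the 0.00227... CPU fraction above, and 157286400 bytes against 67412094976 gives the 0.00233... memory fraction. The balanced filler pods created next pad each node up to roughly 60% utilisation, which is where the 0.59998/0.60018 figures a few lines below come from. A small Go sketch reproducing the logged numbers:

package main

import "fmt"

// fraction mirrors the logged computation: requested / allocatable.
func fraction(requested, allocatable float64) float64 {
	return requested / allocatable
}

func main() {
	// Before the balanced pods (values copied from the log above).
	fmt.Println(fraction(200, 88000))             // ~0.0022727272727272726
	fmt.Println(fraction(157286400, 67412094976)) // ~0.0023332074170962494

	// After the framework adds a filler pod sized to bring each node to ~60%
	// utilisation (values copied from the log below).
	fmt.Println(fraction(52799, 88000))             // ~0.5999886363636364
	fmt.Println(fraction(40459839897, 67412094976)) // ~0.6001866565844672
}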
STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:02:51.749 Feb 3 01:02:51.749: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:51.749: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:51.749: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: f0667b7c-cb8a-4e94-9f8b-e1bbd2d36011-0, Cpu: 52599, Mem: 40302553497 Feb 3 01:02:51.749: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Node: v125-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 Feb 3 01:02:51.749: INFO: Node: v125-worker, totalRequestedMemResource: 40459839897, memAllocatableVal: 67412094976, memFraction: 0.6001866565844672 STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:02:51.749 Feb 3 01:02:51.749: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:51.749: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:51.749: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: fcb774a5-16e0-4168-b2f8-0ea319a8489a-0, Cpu: 52599, Mem: 40302553497 Feb 3 01:02:51.750: INFO: Node: v125-worker2, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 Feb 3 01:02:51.750: INFO: Node: v125-worker2, totalRequestedMemResource: 40459839897, memAllocatableVal: 67412094976, memFraction: 0.6001866565844672 STEP: Trying to launch the pod with podAntiAffinity. 02/03/23 01:02:51.75 STEP: Wait the pod becomes running 02/03/23 01:02:51.757 Feb 3 01:02:51.758: INFO: Waiting up to 5m0s for pod "pod-with-pod-antiaffinity" in namespace "sched-priority-5224" to be "running" Feb 3 01:02:51.762: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801295ms Feb 3 01:02:53.768: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010133911s Feb 3 01:02:55.767: INFO: Pod "pod-with-pod-antiaffinity": Phase="Running", Reason="", readiness=true. Elapsed: 4.008972865s Feb 3 01:02:55.767: INFO: Pod "pod-with-pod-antiaffinity" satisfied condition "running" STEP: Verify the pod was scheduled to the expected node. 02/03/23 01:02:55.77 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:02:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5224" for this suite. 
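The anti-affinity spec that just passed pins pod-with-label-security-s1 to one worker and then expects pod-with-pod-antiaffinity to land on the other worker, using kubernetes.io/hostname as the topology key. A minimal sketch of a pod spec expressing that constraint follows; the security: S1 label is an assumption inferred from the first pod's name (the log never prints the actual label), and the pause image is only a placeholder taken from the nodes' image lists.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityPod sketches a pod that must NOT land on a node (topologyKey
// kubernetes.io/hostname) already running a pod carrying the given label.
func antiAffinityPod(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{labelKey: labelValue},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.7",
			}},
		},
	}
}

func main() {
	// "security: S1" is an assumed label, not one printed by the log.
	p := antiAffinityPod("security", "S1")
	fmt.Println(p.Spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}

In a real client the returned object would be submitted with clientset.CoreV1().Pods(namespace).Create, exactly as the framework does for its own pod helpers.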
02/03/23 01:02:57.791 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","completed":8,"skipped":4631,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [73.212 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:01:44.584 Feb 3 01:01:44.584: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:01:44.586 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:01:44.596 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:01:44.6 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:01:44.604: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:02:44.628: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:02:44.631: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:02:44.645: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:02:44.645: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Feb 3 01:02:44.652: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:44.652: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:44.652: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:44.652: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:44.652: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:44.652: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:44.652: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:44.652: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:44.652: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 STEP: Trying to launch a pod with a label to get a node which can launch it. 
02/03/23 01:02:44.652 Feb 3 01:02:44.661: INFO: Waiting up to 1m0s for pod "pod-with-label-security-s1" in namespace "sched-priority-5224" to be "running" Feb 3 01:02:44.664: INFO: Pod "pod-with-label-security-s1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.011723ms Feb 3 01:02:46.669: INFO: Pod "pod-with-label-security-s1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007985792s Feb 3 01:02:46.669: INFO: Pod "pod-with-label-security-s1" satisfied condition "running" STEP: Verifying the node has a label kubernetes.io/hostname 02/03/23 01:02:46.672 Feb 3 01:02:46.683: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:46.683: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:46.683: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:46.683: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:46.683: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:46.683: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:46.683: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:46.683: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:02:46.683: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:02:46.688: INFO: Waiting for running... Feb 3 01:02:46.690: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:02:51.749 Feb 3 01:02:51.749: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:02:51.749: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:02:51.749: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: f0667b7c-cb8a-4e94-9f8b-e1bbd2d36011-0, Cpu: 52599, Mem: 40302553497 Feb 3 01:02:51.749: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Node: v125-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 Feb 3 01:02:51.749: INFO: Node: v125-worker, totalRequestedMemResource: 40459839897, memAllocatableVal: 67412094976, memFraction: 0.6001866565844672 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
02/03/23 01:02:51.749 Feb 3 01:02:51.749: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:02:51.749: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:02:51.749: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:02:51.749: INFO: Pod for on the node: fcb774a5-16e0-4168-b2f8-0ea319a8489a-0, Cpu: 52599, Mem: 40302553497 Feb 3 01:02:51.750: INFO: Node: v125-worker2, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 Feb 3 01:02:51.750: INFO: Node: v125-worker2, totalRequestedMemResource: 40459839897, memAllocatableVal: 67412094976, memFraction: 0.6001866565844672 STEP: Trying to launch the pod with podAntiAffinity. 02/03/23 01:02:51.75 STEP: Wait the pod becomes running 02/03/23 01:02:51.757 Feb 3 01:02:51.758: INFO: Waiting up to 5m0s for pod "pod-with-pod-antiaffinity" in namespace "sched-priority-5224" to be "running" Feb 3 01:02:51.762: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 4.801295ms Feb 3 01:02:53.768: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010133911s Feb 3 01:02:55.767: INFO: Pod "pod-with-pod-antiaffinity": Phase="Running", Reason="", readiness=true. Elapsed: 4.008972865s Feb 3 01:02:55.767: INFO: Pod "pod-with-pod-antiaffinity" satisfied condition "running" STEP: Verify the pod was scheduled to the expected node. 02/03/23 01:02:55.77 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:02:57.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5224" for this suite. 02/03/23 01:02:57.791 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:02:57.818 Feb 3 01:02:57.818: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:02:57.82 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:02:57.83 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:02:57.834 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:02:57.838: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:02:57.845: INFO: Waiting for terminating namespaces to be deleted... 
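The NodeAffinity spec starting here applies a random label to one node (the kubernetes.io/e2e-27d8396d-... key with value 42 logged further down) and then relaunches the pod with a hard requirement on that label. A hedged sketch of a pod carrying such a requirement, using placeholder key/value arguments rather than the test's generated ones and a placeholder pause image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeAffinityPod sketches a pod that may only schedule onto nodes carrying
// labelKey=labelValue, mirroring the "with-labels" pod relaunched below once
// the random label has been applied to v125-worker.
func nodeAffinityPod(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{{
							MatchExpressions: []corev1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: corev1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "registry.k8s.io/pause:3.7",
			}},
		},
	}
}

func main() {
	// Placeholder key/value; the spec generates a random kubernetes.io/e2e-... key.
	p := nodeAffinityPod("kubernetes.io/e2e-example", "42")
	fmt.Println(p.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Key)
}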
Feb 3 01:02:57.848: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:02:57.853: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:02:57.853: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:02:57.853: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:02:57.853: INFO: pod-with-label-security-s1 from sched-priority-5224 started at 2023-02-03 01:02:44 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Feb 3 01:02:57.853: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:02:57.859: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:02:57.859: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:02:57.859: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:02:57.859: INFO: pod-with-pod-antiaffinity from sched-priority-5224 started at 2023-02-03 01:02:51 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:02:57.859 Feb 3 01:02:57.866: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-9385" to be "running" Feb 3 01:02:57.869: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888265ms Feb 3 01:02:59.873: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007345037s Feb 3 01:02:59.873: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:02:59.876 STEP: Trying to apply a random label on the found node. 02/03/23 01:02:59.883 STEP: verifying the node has the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 42 02/03/23 01:02:59.894 STEP: Trying to relaunch the pod, now with labels. 02/03/23 01:02:59.898 Feb 3 01:02:59.902: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-9385" to be "not pending" Feb 3 01:02:59.905: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088043ms Feb 3 01:03:01.910: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007924544s Feb 3 01:03:01.910: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 off the node v125-worker 02/03/23 01:03:01.913 STEP: verifying the node doesn't have the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 02/03/23 01:03:01.927 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:03:01.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9385" for this suite. 02/03/23 01:03:01.935 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","completed":9,"skipped":4954,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [4.122 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:02:57.818 Feb 3 01:02:57.818: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 01:02:57.82 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:02:57.83 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:02:57.834 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 01:02:57.838: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 01:02:57.845: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 01:02:57.848: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 01:02:57.853: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:02:57.853: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:02:57.853: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:02:57.853: INFO: pod-with-label-security-s1 from sched-priority-5224 started at 2023-02-03 01:02:44 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.853: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Feb 3 01:02:57.853: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 01:02:57.859: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container loopdev ready: true, restart count 0 Feb 3 01:02:57.859: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 01:02:57.859: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 01:02:57.859: INFO: pod-with-pod-antiaffinity from sched-priority-5224 started at 2023-02-03 01:02:51 +0000 UTC (1 container statuses recorded) Feb 3 01:02:57.859: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:02:57.859 Feb 3 01:02:57.866: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-9385" to be "running" Feb 3 01:02:57.869: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888265ms Feb 3 01:02:59.873: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007345037s Feb 3 01:02:59.873: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:02:59.876 STEP: Trying to apply a random label on the found node. 02/03/23 01:02:59.883 STEP: verifying the node has the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 42 02/03/23 01:02:59.894 STEP: Trying to relaunch the pod, now with labels. 02/03/23 01:02:59.898 Feb 3 01:02:59.902: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-9385" to be "not pending" Feb 3 01:02:59.905: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088043ms Feb 3 01:03:01.910: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007924544s Feb 3 01:03:01.910: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 off the node v125-worker 02/03/23 01:03:01.913 STEP: verifying the node doesn't have the label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170 02/03/23 01:03:01.927 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 01:03:01.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9385" for this suite. 02/03/23 01:03:01.935 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:03:01.954 Feb 3 01:03:01.954: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:03:01.955 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:03:01.964 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:03:01.967 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:03:01.970: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:04:01.993: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:04:01.996: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:04:02.009: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:04:02.009: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
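Note on the NodeAffinity predicate spec above ("validates that required NodeAffinity setting is respected if matching"): the test finds a schedulable node with an unlabeled probe pod, applies the random label kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170=42 to that node, and then relaunches the pod with a required node-affinity term that must match that label before the pod can reach Running. The sketch below is a minimal illustration of such a pod object, not the e2e framework's own code; the label key and value are taken from the log, while the pod name, container name, and image are assumptions.

```go
// Minimal sketch (not the e2e framework's code): a pod that can only be
// scheduled onto a node carrying the label applied in the test above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Label key/value as seen in the log; everything else is illustrative.
	labelKey := "kubernetes.io/e2e-27d8396d-efcb-449b-af02-ae2b938a6170"
	labelValue := "42"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.8"}},
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					// Hard requirement: the pod stays Pending unless some node matches.
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{{
							MatchExpressions: []corev1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: corev1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The test then removes the label from the node and verifies it is gone, which is what the "removing the label ... off the node v125-worker" STEP lines above record.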
Feb 3 01:04:02.016: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:02.016: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:02.016: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:02.016: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:02.016: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:02.016: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:02.016: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:02.016: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 02/03/23 01:04:02.016 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:04:02.016 Feb 3 01:04:02.026: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:02.029: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.007129ms Feb 3 01:04:04.034: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008076772s Feb 3 01:04:04.034: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:04:04.037 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:04:04.045 Feb 3 01:04:04.050: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:04.052: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713512ms Feb 3 01:04:06.057: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007421891s Feb 3 01:04:06.057: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:04:06.06 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
02/03/23 01:04:06.069 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Feb 3 01:04:06.105: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:06.105: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:06.105: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:06.105: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:06.105: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:06.105: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:06.105: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:06.105: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:06.110: INFO: Waiting for running... Feb 3 01:04:06.110: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:04:11.171 Feb 3 01:04:11.171: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:11.171: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:11.171: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: d7d9b17b-75df-410a-afcf-b30a433dbd8d-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:04:11.171: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:04:11.171: INFO: Node: v125-worker2, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:04:11.171 Feb 3 01:04:11.171: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:11.171: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:11.171: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: 0cf38912-1772-40ee-a9bd-30b065033a22-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:04:11.171: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:04:11.171: INFO: Node: v125-worker, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Run a ReplicaSet with 4 replicas on node "v125-worker2" 02/03/23 01:04:11.171 Feb 3 01:04:15.188: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:15.192: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043333ms Feb 3 01:04:17.195: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006438018s Feb 3 01:04:17.195: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v125-worker" 02/03/23 01:04:17.198 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker2 02/03/23 01:04:19.216 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 02/03/23 01:04:19.23 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker 02/03/23 01:04:19.233 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 02/03/23 01:04:19.245 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:04:19.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3272" for this suite. 02/03/23 01:04:19.252 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","completed":10,"skipped":5294,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [77.302 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring test/e2e/scheduling/priorities.go:267 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:03:01.954 Feb 3 01:03:01.954: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:03:01.955 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:03:01.964 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:03:01.967 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:03:01.970: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:04:01.993: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:04:01.996: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:04:02.009: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:04:02.009: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Feb 3 01:04:02.016: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:02.016: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:02.016: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:02.016: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:02.016: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:02.016: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:02.016: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:02.016: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:02.016: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 02/03/23 01:04:02.016 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:04:02.016 Feb 3 01:04:02.026: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:02.029: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.007129ms Feb 3 01:04:04.034: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008076772s Feb 3 01:04:04.034: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:04:04.037 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 01:04:04.045 Feb 3 01:04:04.050: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:04.052: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713512ms Feb 3 01:04:06.057: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007421891s Feb 3 01:04:06.057: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 01:04:06.06 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
02/03/23 01:04:06.069 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Feb 3 01:04:06.105: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:06.105: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:06.105: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:06.105: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:06.105: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:06.105: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:06.105: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:06.105: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:04:06.105: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:04:06.110: INFO: Waiting for running... Feb 3 01:04:06.110: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:04:11.171 Feb 3 01:04:11.171: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:04:11.171: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:04:11.171: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: d7d9b17b-75df-410a-afcf-b30a433dbd8d-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:04:11.171: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:04:11.171: INFO: Node: v125-worker2, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:04:11.171 Feb 3 01:04:11.171: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:04:11.171: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:04:11.171: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:04:11.171: INFO: Pod for on the node: 0cf38912-1772-40ee-a9bd-30b065033a22-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:04:11.171: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:04:11.171: INFO: Node: v125-worker, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Run a ReplicaSet with 4 replicas on node "v125-worker2" 02/03/23 01:04:11.171 Feb 3 01:04:15.188: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-3272" to be "running" Feb 3 01:04:15.192: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043333ms Feb 3 01:04:17.195: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006438018s Feb 3 01:04:17.195: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v125-worker" 02/03/23 01:04:17.198 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker2 02/03/23 01:04:19.216 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 02/03/23 01:04:19.23 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker 02/03/23 01:04:19.233 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 02/03/23 01:04:19.245 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:04:19.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3272" for this suite. 02/03/23 01:04:19.252 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:04:19.316 Feb 3 01:04:19.316: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:04:19.317 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:04:19.327 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:04:19.33 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:04:19.333: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:05:19.359: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:05:19.362: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:05:19.374: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:05:19.374: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
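Note on the PodTopologySpread Scoring spec that completed above: the test first creates balancer pods so that each node sits at roughly half of its allocatable resources (exact on CPU: 200m already requested plus a 43800m balancer pod is 44000m of the 88000m allocatable, hence cpuFraction 0.5), then runs a 4-replica ReplicaSet confined to v125-worker2 and expects the unconfined test-pod to land on v125-worker, because spreading over the per-test topology key kubernetes.io/e2e-pts-score scores the emptier node higher. The sketch below shows a pod carrying such a soft spreading constraint; it is illustrative only and may differ from the e2e test's actual pod spec, and the foo=bar label selector is a hypothetical stand-in for the ReplicaSet's pod labels.

```go
// Illustrative sketch: a pod with a soft topology-spread constraint over the
// per-test topology key used above; the scheduler prefers the topology domain
// (here: a node) with fewer matching pods, but may still violate the constraint.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"foo": "bar"}, // hypothetical; must match the ReplicaSet's pod labels
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.8"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score", // node label applied by the test above
				WhenUnsatisfiable: corev1.ScheduleAnyway,         // soft constraint: affects scoring, not feasibility
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```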
Feb 3 01:05:19.380: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:19.380: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.380: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.380: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.380: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:19.380: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.380: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.380: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Feb 3 01:05:19.389: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:19.389: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.389: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.389: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.389: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:19.389: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.389: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.389: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.401: INFO: Waiting for running... Feb 3 01:05:19.401: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
02/03/23 01:05:24.462 Feb 3 01:05:24.463: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:24.463: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:24.463: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: 20ec6083-0ba4-4256-b452-d294a644a5aa-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:05:24.463: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:05:24.463: INFO: Node: v125-worker, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:05:24.463 Feb 3 01:05:24.463: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:24.463: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:24.463: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: b052ead4-4216-49b3-8577-a4b3ce8ece1b-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:05:24.463: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:05:24.463: INFO: Node: v125-worker2, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Trying to apply 10 (tolerable) taints on the first node. 02/03/23 01:05:24.463 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e2e34241-85c2-4a9d-8583=testing-taint-value-7b9db073-2447-4905-886d-f10d8ae0daca:PreferNoSchedule 02/03/23 01:05:24.478 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-694706ac-c8bd-44c4-bbf0=testing-taint-value-53ee0cb2-da88-4de1-90eb-bba3ae2cbbbf:PreferNoSchedule 02/03/23 01:05:24.496 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1348da20-47d2-42ef-aea5=testing-taint-value-7ef66f77-985d-43a3-b704-b2f72cfff2ba:PreferNoSchedule 02/03/23 01:05:24.514 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ccd95015-95f0-4ad3-b0f2=testing-taint-value-da90fc13-12c1-4951-8763-70f78a6361be:PreferNoSchedule 02/03/23 01:05:24.532 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7fe143fc-2f0e-4a4b-ae46=testing-taint-value-c2c6b585-5df5-482f-8fc0-9fd3e26c34df:PreferNoSchedule 02/03/23 01:05:24.55 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e3b3d62a-a2c1-4525-a4b9=testing-taint-value-9d31f54c-002b-4aa2-814e-06f517086ab9:PreferNoSchedule 02/03/23 01:05:24.568 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cf49c1b4-d68b-4f38-b39d=testing-taint-value-398fbf58-e33b-4c30-a6f1-7cfc1644d605:PreferNoSchedule 02/03/23 01:05:24.586 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7b41a212-07ff-4769-9095=testing-taint-value-840964a4-0783-41a3-98bb-200f7ac82523:PreferNoSchedule 02/03/23 01:05:24.604 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-54cea8b3-fad8-43c4-8fcc=testing-taint-value-5b02de93-2174-430a-a4de-94981b4657dd:PreferNoSchedule 02/03/23 01:05:24.623 STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-0d37d2af-0361-4300-85e3=testing-taint-value-947ba39e-5a13-4f63-919f-51a5a99d0e9d:PreferNoSchedule 02/03/23 01:05:24.641 STEP: Adding 10 intolerable taints to all other nodes 02/03/23 01:05:24.645 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ca51189c-e8ec-4552-aaa4=testing-taint-value-e9eaef3e-8013-4009-820f-cb4e0c5f25b1:PreferNoSchedule 02/03/23 01:05:24.658 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9b6c4388-35d4-41d2-8c73=testing-taint-value-31a7d178-8419-4a05-903d-cd6364cb9fd2:PreferNoSchedule 02/03/23 01:05:24.676 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7054d269-46d4-4e14-ba3e=testing-taint-value-e622f1d3-986e-48bf-80b8-d7c53ae0b34f:PreferNoSchedule 02/03/23 01:05:24.695 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ba1c0ad3-3c05-489d-a231=testing-taint-value-e68b2fb3-6273-4d29-aa5e-eaabf9f19208:PreferNoSchedule 02/03/23 01:05:24.713 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ef9c2138-42b3-4f80-87c4=testing-taint-value-ec2c5715-9eb8-47cd-8af1-c22f0e3bbde7:PreferNoSchedule 02/03/23 01:05:24.731 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-69300830-ea48-4d51-8592=testing-taint-value-32e8e722-6be8-48b9-8c33-9b04cc3f0b19:PreferNoSchedule 02/03/23 01:05:24.749 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dad52e73-5bb5-4be7-9d12=testing-taint-value-83c34b91-42be-4a0f-8e15-858cde311ba4:PreferNoSchedule 02/03/23 01:05:24.767 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2626ed5f-bddd-479b-ac07=testing-taint-value-e30ecb1f-c6c1-423d-9305-a0ae3a8127ef:PreferNoSchedule 02/03/23 01:05:24.785 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-df8cb1df-9f5d-4e4d-977c=testing-taint-value-bb223229-b521-43d6-a3df-6e716c49938b:PreferNoSchedule 02/03/23 01:05:24.816 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5ca9876c-5519-4eaa-91c5=testing-taint-value-82f432df-cf65-4656-86c0-b9645db08c9b:PreferNoSchedule 02/03/23 01:05:24.966 STEP: Create a pod that tolerates all the taints of the first node. 02/03/23 01:05:25.008 Feb 3 01:05:25.060: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-7211" to be "running" Feb 3 01:05:25.108: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 47.481996ms Feb 3 01:05:27.111: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.050971045s Feb 3 01:05:27.111: INFO: Pod "with-tolerations" satisfied condition "running" STEP: Pod should prefer scheduled to the node that pod can tolerate. 
02/03/23 01:05:27.111 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ca51189c-e8ec-4552-aaa4=testing-taint-value-e9eaef3e-8013-4009-820f-cb4e0c5f25b1:PreferNoSchedule 02/03/23 01:05:27.13 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9b6c4388-35d4-41d2-8c73=testing-taint-value-31a7d178-8419-4a05-903d-cd6364cb9fd2:PreferNoSchedule 02/03/23 01:05:27.149 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7054d269-46d4-4e14-ba3e=testing-taint-value-e622f1d3-986e-48bf-80b8-d7c53ae0b34f:PreferNoSchedule 02/03/23 01:05:27.167 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ba1c0ad3-3c05-489d-a231=testing-taint-value-e68b2fb3-6273-4d29-aa5e-eaabf9f19208:PreferNoSchedule 02/03/23 01:05:27.19 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ef9c2138-42b3-4f80-87c4=testing-taint-value-ec2c5715-9eb8-47cd-8af1-c22f0e3bbde7:PreferNoSchedule 02/03/23 01:05:27.207 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-69300830-ea48-4d51-8592=testing-taint-value-32e8e722-6be8-48b9-8c33-9b04cc3f0b19:PreferNoSchedule 02/03/23 01:05:27.225 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dad52e73-5bb5-4be7-9d12=testing-taint-value-83c34b91-42be-4a0f-8e15-858cde311ba4:PreferNoSchedule 02/03/23 01:05:27.241 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2626ed5f-bddd-479b-ac07=testing-taint-value-e30ecb1f-c6c1-423d-9305-a0ae3a8127ef:PreferNoSchedule 02/03/23 01:05:27.258 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-df8cb1df-9f5d-4e4d-977c=testing-taint-value-bb223229-b521-43d6-a3df-6e716c49938b:PreferNoSchedule 02/03/23 01:05:27.274 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5ca9876c-5519-4eaa-91c5=testing-taint-value-82f432df-cf65-4656-86c0-b9645db08c9b:PreferNoSchedule 02/03/23 01:05:27.29 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e2e34241-85c2-4a9d-8583=testing-taint-value-7b9db073-2447-4905-886d-f10d8ae0daca:PreferNoSchedule 02/03/23 01:05:27.307 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-694706ac-c8bd-44c4-bbf0=testing-taint-value-53ee0cb2-da88-4de1-90eb-bba3ae2cbbbf:PreferNoSchedule 02/03/23 01:05:27.338 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1348da20-47d2-42ef-aea5=testing-taint-value-7ef66f77-985d-43a3-b704-b2f72cfff2ba:PreferNoSchedule 02/03/23 01:05:27.357 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ccd95015-95f0-4ad3-b0f2=testing-taint-value-da90fc13-12c1-4951-8763-70f78a6361be:PreferNoSchedule 02/03/23 01:05:27.374 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7fe143fc-2f0e-4a4b-ae46=testing-taint-value-c2c6b585-5df5-482f-8fc0-9fd3e26c34df:PreferNoSchedule 02/03/23 01:05:27.415 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e3b3d62a-a2c1-4525-a4b9=testing-taint-value-9d31f54c-002b-4aa2-814e-06f517086ab9:PreferNoSchedule 02/03/23 01:05:27.565 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cf49c1b4-d68b-4f38-b39d=testing-taint-value-398fbf58-e33b-4c30-a6f1-7cfc1644d605:PreferNoSchedule 02/03/23 01:05:27.715 
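Note on the taint/toleration priority spec in progress above: ten PreferNoSchedule taints that the pod tolerates go onto the first node, ten that it does not tolerate go onto every other node, and the "with-tolerations" pod is then expected to prefer the first node, since PreferNoSchedule only lowers a node's score and never makes it infeasible. Below is a sketch of one matching taint/toleration pair, reusing the first key/value pair from the log; the surrounding program and names are illustrative, not the e2e framework's code.

```go
// Illustrative sketch: a PreferNoSchedule taint and the pod toleration that
// matches it, using one key/value pair taken from the log above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One of the "tolerable" taints applied to the first node in the test above.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-e2e34241-85c2-4a9d-8583",
		Value:  "testing-taint-value-7b9db073-2447-4905-886d-f10d8ae0daca",
		Effect: corev1.TaintEffectPreferNoSchedule, // soft effect: untolerated pods are only scored down
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.8"}},
			Tolerations: []corev1.Toleration{{
				Key:      taint.Key,
				Operator: corev1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   corev1.TaintEffectPreferNoSchedule,
			}},
		},
	}

	out, _ := json.MarshalIndent(struct {
		Taint corev1.Taint `json:"taint"`
		Pod   *corev1.Pod  `json:"pod"`
	}{taint, pod}, "", "  ")
	fmt.Println(string(out))
}
```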
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7b41a212-07ff-4769-9095=testing-taint-value-840964a4-0783-41a3-98bb-200f7ac82523:PreferNoSchedule 02/03/23 01:05:27.866 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-54cea8b3-fad8-43c4-8fcc=testing-taint-value-5b02de93-2174-430a-a4de-94981b4657dd:PreferNoSchedule 02/03/23 01:05:28.016 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0d37d2af-0361-4300-85e3=testing-taint-value-947ba39e-5a13-4f63-919f-51a5a99d0e9d:PreferNoSchedule 02/03/23 01:05:28.165 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:05:30.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7211" for this suite. 02/03/23 01:05:30.317 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","completed":11,"skipped":6063,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [71.007 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 01:04:19.316 Feb 3 01:04:19.316: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 02/03/23 01:04:19.317 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 01:04:19.327 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 01:04:19.33 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Feb 3 01:04:19.333: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 01:05:19.359: INFO: Waiting for terminating namespaces to be deleted... Feb 3 01:05:19.362: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Feb 3 01:05:19.374: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Feb 3 01:05:19.374: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Feb 3 01:05:19.380: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:19.380: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.380: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.380: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.380: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:19.380: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.380: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.380: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.380: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Feb 3 01:05:19.389: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:19.389: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.389: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.389: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.389: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:19.389: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:19.389: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:19.389: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Feb 3 01:05:19.389: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412094976, memFraction: 0.0023332074170962494 Feb 3 01:05:19.401: INFO: Waiting for running... Feb 3 01:05:19.401: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
02/03/23 01:05:24.462 Feb 3 01:05:24.463: INFO: ComputeCPUMemFraction for node: v125-worker Feb 3 01:05:24.463: INFO: Pod for on the node: create-loop-devs-d5nrm, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: kindnet-xhfn8, Cpu: 100, Mem: 52428800 Feb 3 01:05:24.463: INFO: Pod for on the node: kube-proxy-pxrcg, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: 20ec6083-0ba4-4256-b452-d294a644a5aa-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:05:24.463: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:05:24.463: INFO: Node: v125-worker, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Compute Cpu, Mem Fraction after create balanced pods. 02/03/23 01:05:24.463 Feb 3 01:05:24.463: INFO: ComputeCPUMemFraction for node: v125-worker2 Feb 3 01:05:24.463: INFO: Pod for on the node: create-loop-devs-tlwgp, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: kindnet-h8fbr, Cpu: 100, Mem: 52428800 Feb 3 01:05:24.463: INFO: Pod for on the node: kube-proxy-bvl9x, Cpu: 100, Mem: 209715200 Feb 3 01:05:24.463: INFO: Pod for on the node: b052ead4-4216-49b3-8577-a4b3ce8ece1b-0, Cpu: 43800, Mem: 33561344000 Feb 3 01:05:24.463: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 Feb 3 01:05:24.463: INFO: Node: v125-worker2, totalRequestedMemResource: 33718630400, memAllocatableVal: 67412094976, memFraction: 0.5001866565933677 STEP: Trying to apply 10 (tolerable) taints on the first node. 02/03/23 01:05:24.463 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e2e34241-85c2-4a9d-8583=testing-taint-value-7b9db073-2447-4905-886d-f10d8ae0daca:PreferNoSchedule 02/03/23 01:05:24.478 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-694706ac-c8bd-44c4-bbf0=testing-taint-value-53ee0cb2-da88-4de1-90eb-bba3ae2cbbbf:PreferNoSchedule 02/03/23 01:05:24.496 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1348da20-47d2-42ef-aea5=testing-taint-value-7ef66f77-985d-43a3-b704-b2f72cfff2ba:PreferNoSchedule 02/03/23 01:05:24.514 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ccd95015-95f0-4ad3-b0f2=testing-taint-value-da90fc13-12c1-4951-8763-70f78a6361be:PreferNoSchedule 02/03/23 01:05:24.532 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7fe143fc-2f0e-4a4b-ae46=testing-taint-value-c2c6b585-5df5-482f-8fc0-9fd3e26c34df:PreferNoSchedule 02/03/23 01:05:24.55 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e3b3d62a-a2c1-4525-a4b9=testing-taint-value-9d31f54c-002b-4aa2-814e-06f517086ab9:PreferNoSchedule 02/03/23 01:05:24.568 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cf49c1b4-d68b-4f38-b39d=testing-taint-value-398fbf58-e33b-4c30-a6f1-7cfc1644d605:PreferNoSchedule 02/03/23 01:05:24.586 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7b41a212-07ff-4769-9095=testing-taint-value-840964a4-0783-41a3-98bb-200f7ac82523:PreferNoSchedule 02/03/23 01:05:24.604 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-54cea8b3-fad8-43c4-8fcc=testing-taint-value-5b02de93-2174-430a-a4de-94981b4657dd:PreferNoSchedule 02/03/23 01:05:24.623 STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-0d37d2af-0361-4300-85e3=testing-taint-value-947ba39e-5a13-4f63-919f-51a5a99d0e9d:PreferNoSchedule 02/03/23 01:05:24.641 STEP: Adding 10 intolerable taints to all other nodes 02/03/23 01:05:24.645 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ca51189c-e8ec-4552-aaa4=testing-taint-value-e9eaef3e-8013-4009-820f-cb4e0c5f25b1:PreferNoSchedule 02/03/23 01:05:24.658 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9b6c4388-35d4-41d2-8c73=testing-taint-value-31a7d178-8419-4a05-903d-cd6364cb9fd2:PreferNoSchedule 02/03/23 01:05:24.676 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7054d269-46d4-4e14-ba3e=testing-taint-value-e622f1d3-986e-48bf-80b8-d7c53ae0b34f:PreferNoSchedule 02/03/23 01:05:24.695 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ba1c0ad3-3c05-489d-a231=testing-taint-value-e68b2fb3-6273-4d29-aa5e-eaabf9f19208:PreferNoSchedule 02/03/23 01:05:24.713 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ef9c2138-42b3-4f80-87c4=testing-taint-value-ec2c5715-9eb8-47cd-8af1-c22f0e3bbde7:PreferNoSchedule 02/03/23 01:05:24.731 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-69300830-ea48-4d51-8592=testing-taint-value-32e8e722-6be8-48b9-8c33-9b04cc3f0b19:PreferNoSchedule 02/03/23 01:05:24.749 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dad52e73-5bb5-4be7-9d12=testing-taint-value-83c34b91-42be-4a0f-8e15-858cde311ba4:PreferNoSchedule 02/03/23 01:05:24.767 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2626ed5f-bddd-479b-ac07=testing-taint-value-e30ecb1f-c6c1-423d-9305-a0ae3a8127ef:PreferNoSchedule 02/03/23 01:05:24.785 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-df8cb1df-9f5d-4e4d-977c=testing-taint-value-bb223229-b521-43d6-a3df-6e716c49938b:PreferNoSchedule 02/03/23 01:05:24.816 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5ca9876c-5519-4eaa-91c5=testing-taint-value-82f432df-cf65-4656-86c0-b9645db08c9b:PreferNoSchedule 02/03/23 01:05:24.966 STEP: Create a pod that tolerates all the taints of the first node. 02/03/23 01:05:25.008 Feb 3 01:05:25.060: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-7211" to be "running" Feb 3 01:05:25.108: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 47.481996ms Feb 3 01:05:27.111: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.050971045s Feb 3 01:05:27.111: INFO: Pod "with-tolerations" satisfied condition "running" STEP: Pod should prefer scheduled to the node that pod can tolerate. 
02/03/23 01:05:27.111 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ca51189c-e8ec-4552-aaa4=testing-taint-value-e9eaef3e-8013-4009-820f-cb4e0c5f25b1:PreferNoSchedule 02/03/23 01:05:27.13 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9b6c4388-35d4-41d2-8c73=testing-taint-value-31a7d178-8419-4a05-903d-cd6364cb9fd2:PreferNoSchedule 02/03/23 01:05:27.149 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7054d269-46d4-4e14-ba3e=testing-taint-value-e622f1d3-986e-48bf-80b8-d7c53ae0b34f:PreferNoSchedule 02/03/23 01:05:27.167 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ba1c0ad3-3c05-489d-a231=testing-taint-value-e68b2fb3-6273-4d29-aa5e-eaabf9f19208:PreferNoSchedule 02/03/23 01:05:27.19 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ef9c2138-42b3-4f80-87c4=testing-taint-value-ec2c5715-9eb8-47cd-8af1-c22f0e3bbde7:PreferNoSchedule 02/03/23 01:05:27.207 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-69300830-ea48-4d51-8592=testing-taint-value-32e8e722-6be8-48b9-8c33-9b04cc3f0b19:PreferNoSchedule 02/03/23 01:05:27.225 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dad52e73-5bb5-4be7-9d12=testing-taint-value-83c34b91-42be-4a0f-8e15-858cde311ba4:PreferNoSchedule 02/03/23 01:05:27.241 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2626ed5f-bddd-479b-ac07=testing-taint-value-e30ecb1f-c6c1-423d-9305-a0ae3a8127ef:PreferNoSchedule 02/03/23 01:05:27.258 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-df8cb1df-9f5d-4e4d-977c=testing-taint-value-bb223229-b521-43d6-a3df-6e716c49938b:PreferNoSchedule 02/03/23 01:05:27.274 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5ca9876c-5519-4eaa-91c5=testing-taint-value-82f432df-cf65-4656-86c0-b9645db08c9b:PreferNoSchedule 02/03/23 01:05:27.29 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e2e34241-85c2-4a9d-8583=testing-taint-value-7b9db073-2447-4905-886d-f10d8ae0daca:PreferNoSchedule 02/03/23 01:05:27.307 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-694706ac-c8bd-44c4-bbf0=testing-taint-value-53ee0cb2-da88-4de1-90eb-bba3ae2cbbbf:PreferNoSchedule 02/03/23 01:05:27.338 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1348da20-47d2-42ef-aea5=testing-taint-value-7ef66f77-985d-43a3-b704-b2f72cfff2ba:PreferNoSchedule 02/03/23 01:05:27.357 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ccd95015-95f0-4ad3-b0f2=testing-taint-value-da90fc13-12c1-4951-8763-70f78a6361be:PreferNoSchedule 02/03/23 01:05:27.374 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7fe143fc-2f0e-4a4b-ae46=testing-taint-value-c2c6b585-5df5-482f-8fc0-9fd3e26c34df:PreferNoSchedule 02/03/23 01:05:27.415 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e3b3d62a-a2c1-4525-a4b9=testing-taint-value-9d31f54c-002b-4aa2-814e-06f517086ab9:PreferNoSchedule 02/03/23 01:05:27.565 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cf49c1b4-d68b-4f38-b39d=testing-taint-value-398fbf58-e33b-4c30-a6f1-7cfc1644d605:PreferNoSchedule 02/03/23 01:05:27.715 
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7b41a212-07ff-4769-9095=testing-taint-value-840964a4-0783-41a3-98bb-200f7ac82523:PreferNoSchedule 02/03/23 01:05:27.866 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-54cea8b3-fad8-43c4-8fcc=testing-taint-value-5b02de93-2174-430a-a4de-94981b4657dd:PreferNoSchedule 02/03/23 01:05:28.016 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0d37d2af-0361-4300-85e3=testing-taint-value-947ba39e-5a13-4f63-919f-51a5a99d0e9d:PreferNoSchedule 02/03/23 01:05:28.165 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 Feb 3 01:05:30.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7211" for this suite. 02/03/23 01:05:30.317 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] test/e2e/e2e.go:87 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 {"msg":"Test Suite completed","completed":11,"skipped":7054,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} Feb 3 01:05:30.386: INFO: Running AfterSuite actions on all nodes Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 Feb 3 01:05:30.386: INFO: Running AfterSuite actions on node 1 Feb 3 01:05:30.386: INFO: Skipping dumping logs from cluster ------------------------------ [SynchronizedAfterSuite] PASSED [0.000 seconds] [SynchronizedAfterSuite] test/e2e/e2e.go:87 Begin Captured 
GinkgoWriter Output >>
[SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87
Feb 3 01:05:30.386: INFO: Running AfterSuite actions on all nodes
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Feb 3 01:05:30.386: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87
Feb 3 01:05:30.386: INFO: Running AfterSuite actions on node 1
Feb 3 01:05:30.386: INFO: Skipping dumping logs from cluster
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:146
[ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:146
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:146
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:146
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:559
[ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:559
------------------------------
[ReportAfterSuite] PASSED [0.146 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:559
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:559
<< End Captured GinkgoWriter Output
------------------------------
Summarizing 1 Failure:
[FAIL] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach] verify pod overhead is accounted for
test/e2e/scheduling/predicates.go:248
Ran 12 of 7066 Specs in 358.302 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 7054 Skipped
--- FAIL: TestE2E (358.60s)
FAIL
Ginkgo ran 1 suite in 5m58.709367861s
Test Suite Failed
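On the single failure summarized above (the pod-overhead predicate spec failing in its BeforeEach at test/e2e/scheduling/predicates.go:248): that spec exercises Pod Overhead, where a RuntimeClass declares a fixed per-pod resource overhead and the scheduler adds that overhead to the sum of the container requests when deciding whether the pod fits on a node. The log does not show why the BeforeEach failed, so nothing is concluded about the cause here; the sketch below only illustrates the objects involved, with hypothetical names and values.

```go
// Illustrative sketch (hypothetical names/values): a RuntimeClass declaring a
// fixed pod overhead, and a pod that selects it. At scheduling time the
// overhead is added to the container requests when checking node fit.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"}, // hypothetical
		Handler:    "runc",                                          // hypothetical handler name
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}

	rcName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-overhead"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName, // effective request = container requests + RuntimeClass overhead
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.8",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("256Mi"),
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(struct {
		RuntimeClass *nodev1.RuntimeClass `json:"runtimeClass"`
		Pod          *corev1.Pod          `json:"pod"`
	}{rc, pod}, "", "  ")
	fmt.Println(string(out))
}
```

With these assumed numbers the scheduler would account the pod as 750m CPU and 376Mi memory when checking node fit, which is the accounting the failed spec is meant to verify.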