I0505 01:02:53.169364 17 e2e.go:116] Starting e2e run "eb7d3fb3-67e8-4cfd-aead-3795137a660d" on Ginkgo node 1
May 5 01:02:53.187: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1683248573 - will randomize all specs
Will run 14 of 7066 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
  [SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
May 5 01:02:53.316: INFO: >>> kubeConfig: /home/xtesting/.kube/config
May 5 01:02:53.318: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 5 01:02:53.345: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 5 01:02:53.378: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 5 01:02:53.378: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 5 01:02:53.378: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 5 01:02:53.384: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 5 01:02:53.384: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 5 01:02:53.384: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 5 01:02:53.384: INFO: e2e test version: v1.25.8
May 5 01:02:53.386: INFO: kube-apiserver version: v1.25.2
[SynchronizedBeforeSuite] TOP-LEVEL
  test/e2e/e2e.go:76
May 5 01:02:53.386: INFO: >>> kubeConfig: /home/xtesting/.kube/config
May 5 01:02:53.391: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.075 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:76
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
test/e2e/scheduling/predicates.go:660
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 05/05/23 01:02:53.457
May 5 01:02:53.457: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 05/05/23 01:02:53.458
STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:02:53.47
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:02:53.473
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
May 5 01:02:53.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 5 01:02:53.485: INFO: Waiting for terminating namespaces to be deleted...
May 5 01:02:53.488: INFO: Logging pods the apiserver thinks is on node v125-worker before test
May 5 01:02:53.494: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.494: INFO: Container loopdev ready: true, restart count 0
May 5 01:02:53.494: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.494: INFO: Container kindnet-cni ready: true, restart count 0
May 5 01:02:53.494: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.494: INFO: Container kube-proxy ready: true, restart count 0
May 5 01:02:53.494: INFO: back-off-cap from pods-2166 started at 2023-05-05 00:35:59 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.494: INFO: Container back-off-cap ready: false, restart count 10
May 5 01:02:53.494: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test
May 5 01:02:53.499: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.499: INFO: Container loopdev ready: true, restart count 0
May 5 01:02:53.499: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.499: INFO: Container kindnet-cni ready: true, restart count 0
May 5 01:02:53.499: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:02:53.499: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  test/e2e/scheduling/predicates.go:660
STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:02:53.499
May 5 01:02:53.507: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1044" to be "running"
May 5 01:02:53.510: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015888ms
May 5 01:02:55.514: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007134165s
May 5 01:02:55.514: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:02:55.517
STEP: Trying to apply a random label on the found node. 05/05/23 01:02:55.53
STEP: verifying the node has the label kubernetes.io/e2e-ac4db0eb-6333-42e8-bfe7-ec2cb0233910 90 05/05/23 01:02:55.541
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled 05/05/23 01:02:55.545
May 5 01:02:55.550: INFO: Waiting up to 5m0s for pod "pod1" in namespace "sched-pred-1044" to be "not pending"
May 5 01:02:55.553: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147756ms
May 5 01:02:57.558: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008147529s
May 5 01:02:57.558: INFO: Pod "pod1" satisfied condition "not pending"
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.20.0.11 on the node which pod1 resides and expect scheduled 05/05/23 01:02:57.558
May 5 01:02:57.563: INFO: Waiting up to 5m0s for pod "pod2" in namespace "sched-pred-1044" to be "not pending"
May 5 01:02:57.567: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.984089ms
May 5 01:02:59.572: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.008381756s
May 5 01:02:59.572: INFO: Pod "pod2" satisfied condition "not pending"
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.20.0.11 but use UDP protocol on the node which pod2 resides 05/05/23 01:02:59.572
May 5 01:02:59.577: INFO: Waiting up to 5m0s for pod "pod3" in namespace "sched-pred-1044" to be "not pending"
May 5 01:02:59.580: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.213557ms
May 5 01:03:01.585: INFO: Pod "pod3": Phase="Running", Reason="", readiness=false. Elapsed: 2.00777797s
May 5 01:03:01.585: INFO: Pod "pod3" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-ac4db0eb-6333-42e8-bfe7-ec2cb0233910 off the node v125-worker2 05/05/23 01:03:01.585
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ac4db0eb-6333-42e8-bfe7-ec2cb0233910 05/05/23 01:03:01.599
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
May 5 01:03:01.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1044" for this suite. 05/05/23 01:03:01.607
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","completed":1,"skipped":267,"failed":0}
------------------------------
• [SLOW TEST] [8.156 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  test/e2e/scheduling/predicates.go:660
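The spec above passes because kubelet and the scheduler treat a host-port claim as the full (hostIP, hostPort, protocol) tuple: pod1 binds TCP 127.0.0.1:54321, pod2 binds TCP 172.20.0.11:54321 and pod3 binds UDP 172.20.0.11:54321, so all three coexist on v125-worker2. The sketch below is not the e2e framework's code; it is a minimal client-go illustration of those three port claims, assuming a hypothetical helper name (hostPortPod), pinning the node via spec.nodeName instead of the random node label the test actually applies, and reusing the agnhost image reported in the log.

```go
// Sketch only: three pods whose (hostIP, hostPort, protocol) tuples never collide,
// mirroring pod1/pod2/pod3 from the log. Helper name, container port and image
// wiring are illustrative assumptions, not the e2e test's exact construction.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to one node that claims hostPort 54321
// for the given hostIP and protocol.
func hostPortPod(name, nodeName, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: nodeName, // the real spec uses a node-selector label instead
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.40",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	pods := []*corev1.Pod{
		hostPortPod("pod1", "v125-worker2", "127.0.0.1", corev1.ProtocolTCP),   // TCP 127.0.0.1:54321
		hostPortPod("pod2", "v125-worker2", "172.20.0.11", corev1.ProtocolTCP), // TCP 172.20.0.11:54321
		hostPortPod("pod3", "v125-worker2", "172.20.0.11", corev1.ProtocolUDP), // UDP 172.20.0.11:54321
	}
	for _, p := range pods {
		port := p.Spec.Containers[0].Ports[0]
		fmt.Printf("%s -> %s %s:%d\n", p.Name, port.Protocol, port.HostIP, port.HostPort)
	}
}
```

If any two of these claims shared the full tuple, the scheduler would leave the later pod Pending for lack of a free host port, which is exactly the conflict this spec verifies does not happen here.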
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
test/e2e/scheduling/predicates.go:271
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 05/05/23 01:03:01.616
May 5 01:03:01.616: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:01.617
STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:01.627
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:01.63
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
May 5 01:03:01.634: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 5 01:03:01.640: INFO: Waiting for terminating namespaces to be deleted...
May 5 01:03:01.643: INFO: Logging pods the apiserver thinks is on node v125-worker before test
May 5 01:03:01.648: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.648: INFO: Container loopdev ready: true, restart count 0
May 5 01:03:01.648: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.648: INFO: Container kindnet-cni ready: true, restart count 0
May 5 01:03:01.648: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.648: INFO: Container kube-proxy ready: true, restart count 0
May 5 01:03:01.648: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test
May 5 01:03:01.654: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container loopdev ready: true, restart count 0
May 5 01:03:01.654: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container kindnet-cni ready: true, restart count 0
May 5 01:03:01.654: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container kube-proxy ready: true, restart count 0
May 5 01:03:01.654: INFO: pod1 from sched-pred-1044 started at 2023-05-05 01:02:55 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container agnhost ready: true, restart count 0
May 5 01:03:01.654: INFO: pod2 from sched-pred-1044 started at 2023-05-05 01:02:57 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container agnhost ready: true, restart count 0
May 5 01:03:01.654: INFO: pod3 from sched-pred-1044 started at 2023-05-05 01:02:59 +0000 UTC (1 container statuses recorded)
May 5 01:03:01.654: INFO: Container agnhost ready: false, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  test/e2e/scheduling/predicates.go:216
STEP: Add RuntimeClass and fake resource 05/05/23 01:03:01.66
STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:01.66
May 5 01:03:01.667: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3519" to be "running"
May 5 01:03:01.670: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.788173ms
May 5 01:03:03.674: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007147816s
May 5 01:03:03.674: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes.
05/05/23 01:03:03.677 May 5 01:03:03.703: INFO: Unexpected error: failed to create RuntimeClass resource: <*errors.StatusError | 0xc00224df40>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "runtimeclasses.node.k8s.io \"test-handler\" already exists", Reason: "AlreadyExists", Details: { Name: "test-handler", Group: "node.k8s.io", Kind: "runtimeclasses", UID: "", Causes: nil, RetryAfterSeconds: 0, }, Code: 409, }, } May 5 01:03:03.703: FAIL: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.4.1() test/e2e/scheduling/predicates.go:248 +0x745 [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass 05/05/23 01:03:03.704 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 STEP: Collecting events from namespace "sched-pred-3519". 05/05/23 01:03:03.718 STEP: Found 4 events. 05/05/23 01:03:03.721 May 5 01:03:03.721: INFO: At 2023-05-05 01:03:01 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-3519/without-label to v125-worker May 5 01:03:03.721: INFO: At 2023-05-05 01:03:02 +0000 UTC - event for without-label: {kubelet v125-worker} Pulled: Container image "k8s.gcr.io/pause:3.8" already present on machine May 5 01:03:03.721: INFO: At 2023-05-05 01:03:02 +0000 UTC - event for without-label: {kubelet v125-worker} Created: Created container without-label May 5 01:03:03.721: INFO: At 2023-05-05 01:03:02 +0000 UTC - event for without-label: {kubelet v125-worker} Started: Started container without-label May 5 01:03:03.724: INFO: POD NODE PHASE GRACE CONDITIONS May 5 01:03:03.724: INFO: May 5 01:03:03.728: INFO: Logging node info for node v125-control-plane May 5 01:03:03.732: INFO: Node Info: &Node{ObjectMeta:{v125-control-plane bcb05f97-114b-41e1-91dc-e979551fcbdf 5079901 0 2023-03-27 13:20:10 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v125-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-03-27 13:20:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-27 13:20:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2023-03-27 13:20:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-05-05 01:02:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-05-05 01:02:14 +0000 UTC,LastTransitionTime:2023-03-27 13:20:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-05-05 01:02:14 +0000 UTC,LastTransitionTime:2023-03-27 13:20:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-05-05 01:02:14 +0000 UTC,LastTransitionTime:2023-03-27 13:20:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-05-05 01:02:14 +0000 UTC,LastTransitionTime:2023-03-27 13:20:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.5,},NodeAddress{Type:Hostname,Address:v125-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:26b06f34735442b1a80ede53cf43d738,SystemUUID:dc88f2b4-6c1b-4d11-8290-ba24967afa75,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d 
registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 5 01:03:03.732: INFO: Logging kubelet events for node v125-control-plane May 5 01:03:03.735: INFO: Logging pods the kubelet thinks is on node v125-control-plane May 5 01:03:03.763: INFO: create-loop-devs-fpn7j started at 2023-03-27 13:20:36 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:03.763: INFO: kube-controller-manager-v125-control-plane started at 2023-03-27 13:20:14 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container kube-controller-manager ready: true, restart count 0 May 5 01:03:03.763: INFO: kube-proxy-v4vrh started at 2023-03-27 13:20:26 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:03.763: INFO: kindnet-bgdzv started at 2023-03-27 13:20:26 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:03.763: INFO: local-path-provisioner-684f458cdd-cnk5l started at 2023-03-27 13:20:34 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container local-path-provisioner ready: true, restart count 0 May 5 01:03:03.763: INFO: coredns-565d847f94-vn2s8 started at 2023-03-27 13:20:34 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container coredns ready: true, restart count 0 May 5 01:03:03.763: INFO: etcd-v125-control-plane started at 2023-03-27 13:20:15 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container etcd ready: true, restart count 0 May 5 01:03:03.763: INFO: kube-apiserver-v125-control-plane started at 2023-03-27 13:20:15 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container kube-apiserver ready: true, restart count 0 May 5 01:03:03.763: INFO: kube-scheduler-v125-control-plane started at 2023-03-27 13:20:14 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container kube-scheduler ready: true, restart count 0 May 5 01:03:03.763: INFO: coredns-565d847f94-8zdft started at 2023-03-27 13:20:34 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.763: INFO: Container coredns ready: true, restart count 0 May 5 01:03:03.858: INFO: Latency metrics for node v125-control-plane May 5 01:03:03.858: INFO: Logging node info for node v125-worker May 5 01:03:03.862: INFO: Node Info: &Node{ObjectMeta:{v125-worker 709a38f2-4c59-4e4c-bd7e-2f21949dffde 5080047 0 2023-03-27 13:20:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 
kubernetes.io/hostname:v125-worker kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-27 13:20:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-05-05 01:01:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:v125-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:07a5bfadbe9d4cce8e9377dd3c3dcbc8,SystemUUID:95b18388-a368-40c6-b230-5cd31a0f9e2d,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Ubuntu 22.04.1 
LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 
k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/build-image/distroless-iptables@sha256:38e6b091d238094f081efad3e2b362e6480b2156f5f4fba6ea46835ecdcd47e2 k8s.gcr.io/build-image/distroless-iptables:v0.1.1],SizeBytes:7634231,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 5 01:03:03.862: INFO: Logging kubelet events for node v125-worker May 5 01:03:03.865: INFO: Logging pods the kubelet thinks is on node v125-worker May 5 01:03:03.887: INFO: kube-proxy-kzswj started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:03.887: INFO: create-loop-devs-9mv4v started at 2023-03-27 13:20:36 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:03.887: INFO: kindnet-m8hwr started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.379: INFO: Latency metrics for node v125-worker May 5 01:03:04.379: INFO: Logging node info for node v125-worker2 May 5 01:03:04.383: INFO: Node Info: &Node{ObjectMeta:{v125-worker2 e14028ae-646a-4120-9705-b880cefa6f0a 5080027 0 2023-03-27 13:20:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 
kubernetes.io/hostname:v125-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-27 13:20:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2023-05-05 00:02:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet Update v1 2023-05-05 00:58:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:v125-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc65d1c9c343d18a55a96360d96c2d,SystemUUID:ce9d964f-c178-4483-8771-40912a915087,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:7ed3bfb1429e97f721cbd8b2953ffb1f0186e89c1c99ee0e919d563b0caa81d2 k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.3],SizeBytes:151196506,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa 
k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 5 01:03:04.384: INFO: Logging kubelet events for node v125-worker2 May 5 01:03:04.387: INFO: Logging pods the kubelet thinks is on node v125-worker2 May 5 01:03:04.412: INFO: kindnet-4spxt started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.412: INFO: kube-proxy-df52h started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.412: INFO: 
pod2 started at 2023-05-05 01:02:57 +0000 UTC (0+1 container statuses recorded)
May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0
May 5 01:03:04.412: INFO: create-loop-devs-cfx6b started at 2023-03-27 13:20:36 +0000 UTC (0+1 container statuses recorded)
May 5 01:03:04.412: INFO: Container loopdev ready: true, restart count 0
May 5 01:03:04.412: INFO: pod1 started at 2023-05-05 01:02:55 +0000 UTC (0+1 container statuses recorded)
May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0
May 5 01:03:04.412: INFO: pod3 started at 2023-05-05 01:02:59 +0000 UTC (0+1 container statuses recorded)
May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0
May 5 01:03:04.852: INFO: Latency metrics for node v125-worker2
May 5 01:03:04.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3519" for this suite. 05/05/23 01:03:04.857
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","completed":1,"skipped":326,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]}
------------------------------
• [FAILED] [3.247 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach]
  test/e2e/scheduling/predicates.go:216
    verify pod overhead is accounted for
    test/e2e/scheduling/predicates.go:271
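The 409 in this spec means the cluster already holds a cluster-scoped RuntimeClass named "test-handler", most likely left over from an earlier run against this long-lived kind cluster, so the BeforeEach at test/e2e/scheduling/predicates.go:248 aborts before the overhead check ever runs. Below is a minimal, hypothetical client-go sketch of a more tolerant setup (helper name ensureRuntimeClass is made up); only the object name "test-handler" comes from the log, while the handler and overhead values are illustrative placeholders rather than what the e2e test configures.

```go
// Sketch: create the RuntimeClass the spec needs, but treat AlreadyExists
// (a stale object from a previous run) as something to replace rather than
// a hard failure. Handler and overhead values are illustrative only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ensureRuntimeClass(ctx context.Context, cs kubernetes.Interface) error {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "test-handler", // placeholder; mirrors the name seen in the log
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("10Mi"),
			},
		},
	}
	_, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		// Stale object from a previous run: delete it and retry once.
		if derr := cs.NodeV1().RuntimeClasses().Delete(ctx, rc.Name, metav1.DeleteOptions{}); derr != nil {
			return derr
		}
		_, err = cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{})
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := ensureRuntimeClass(context.Background(), cs); err != nil {
		panic(err)
	}
	fmt.Println("runtimeclass test-handler is in place")
}
```

Removing the stale object by hand (kubectl delete runtimeclass test-handler) before re-running the suite has the same effect, since the spec's own AfterEach normally deletes it on a clean run.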
kubernetes.io/hostname:v125-worker kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-27 13:20:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2023-05-05 01:01:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-05-05 01:01:50 +0000 UTC,LastTransitionTime:2023-03-27 13:20:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.12,},NodeAddress{Type:Hostname,Address:v125-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:07a5bfadbe9d4cce8e9377dd3c3dcbc8,SystemUUID:95b18388-a368-40c6-b230-5cd31a0f9e2d,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Ubuntu 22.04.1 
LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 
k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/build-image/distroless-iptables@sha256:38e6b091d238094f081efad3e2b362e6480b2156f5f4fba6ea46835ecdcd47e2 k8s.gcr.io/build-image/distroless-iptables:v0.1.1],SizeBytes:7634231,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 5 01:03:03.862: INFO: Logging kubelet events for node v125-worker May 5 01:03:03.865: INFO: Logging pods the kubelet thinks is on node v125-worker May 5 01:03:03.887: INFO: kube-proxy-kzswj started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:03.887: INFO: create-loop-devs-9mv4v started at 2023-03-27 13:20:36 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:03.887: INFO: kindnet-m8hwr started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:03.887: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.379: INFO: Latency metrics for node v125-worker May 5 01:03:04.379: INFO: Logging node info for node v125-worker2 May 5 01:03:04.383: INFO: Node Info: &Node{ObjectMeta:{v125-worker2 e14028ae-646a-4120-9705-b880cefa6f0a 5080027 0 2023-03-27 13:20:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 
kubernetes.io/hostname:v125-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v125-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2023-03-27 13:20:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2023-03-27 13:20:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2023-05-05 00:02:37 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}} status} {kubelet Update v1 2023-05-05 00:58:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v125/v125-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-05-05 00:58:53 +0000 UTC,LastTransitionTime:2023-03-27 13:20:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.0.11,},NodeAddress{Type:Hostname,Address:v125-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc65d1c9c343d18a55a96360d96c2d,SystemUUID:ce9d964f-c178-4483-8771-40912a915087,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.25.2,KubeProxyVersion:v1.25.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:7ed3bfb1429e97f721cbd8b2953ffb1f0186e89c1c99ee0e919d563b0caa81d2 k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.3],SizeBytes:151196506,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.4-0],SizeBytes:102157811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 k8s.gcr.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:d38cf34a86c8798fbd7e7dce374a36ef6da7a1a2f88bf384e66c239d527493d9 registry.k8s.io/kube-apiserver:v1.25.2],SizeBytes:76513774,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:e3c94ddacb1a39f08a66d844b70d29c07327136b7578a3e512a0dde02509bd44 registry.k8s.io/kube-controller-manager:v1.25.2],SizeBytes:64499324,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:63c431d51c9715cd19db89455c69e4277d7282f0cff1fe137170908b4d1dcad1 registry.k8s.io/kube-proxy:v1.25.2],SizeBytes:63270397,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:db84dea8fb911257d8aa41437db54d44dba91d4102ec1872673e7daec026226d registry.k8s.io/kube-scheduler:v1.25.2],SizeBytes:51921020,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:af7e3857d87770ddb40f5ea4f89b5a2709504ab1ee31f9ea4ab5823c045f2146 k8s.gcr.io/e2e-test-images/agnhost:2.40],SizeBytes:51155161,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:6477988532358148d2e98f7c747db4e9250bbc7ad2664bf666348abf9ee1f5aa 
k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0],SizeBytes:22728994,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6e0546563b18872b0aa0cad7255a26bb9a87cb879b7fc3e2383c867ef4f706fb k8s.gcr.io/sig-storage/csi-resizer:v1.3.0],SizeBytes:21671340,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:80dec81b679a733fda448be92a2331150d99095947d04003ecff3dbd7f2a476a k8s.gcr.io/sig-storage/csi-attacher:v3.3.0],SizeBytes:21444261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:6029c252dae6178c99b580de72d7776158edbc81be0de15cedc4152a3acfed18 k8s.gcr.io/sig-storage/hostpathplugin:v1.7.3],SizeBytes:15224494,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:f9bcee63734b7b01555ee8fc8fb01ac2922478b2c8934bf8d468dd2916edc405 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.3.0],SizeBytes:8582494,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:55d0552eb6538050ea7741e46b35d27eccffeeaed7010f9f2bad0a89c149bc6f k8s.gcr.io/e2e-test-images/nginx:1.15-2],SizeBytes:7000509,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d k8s.gcr.io/pause:3.8],SizeBytes:311286,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 5 01:03:04.384: INFO: Logging kubelet events for node v125-worker2 May 5 01:03:04.387: INFO: Logging pods the kubelet thinks is on node v125-worker2 May 5 01:03:04.412: INFO: kindnet-4spxt started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.412: INFO: kube-proxy-df52h started at 2023-03-27 13:20:32 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.412: INFO: 
pod2 started at 2023-05-05 01:02:57 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.412: INFO: create-loop-devs-cfx6b started at 2023-03-27 13:20:36 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:04.412: INFO: pod1 started at 2023-05-05 01:02:55 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.412: INFO: pod3 started at 2023-05-05 01:02:59 +0000 UTC (0+1 container statuses recorded) May 5 01:03:04.412: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.852: INFO: Latency metrics for node v125-worker2 May 5 01:03:04.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3519" for this suite. 05/05/23 01:03:04.857 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output May 5 01:03:03.703: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists In [BeforeEach] at: test/e2e/scheduling/predicates.go:248 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:04.863 May 5 01:03:04.863: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:04.865 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:04.876 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:04.88 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:04.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:04.891: INFO: Waiting for terminating namespaces to be deleted... 
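
The pod-overhead spec above failed in its BeforeEach: a RuntimeClass named "test-handler", left over from an earlier run that never reached its cleanup, made the unconditional create at test/e2e/scheduling/predicates.go:248 return 409 AlreadyExists. A minimal client-go sketch of an idempotent setup that tolerates such a leftover object follows; the package layout, kubeconfig path, and helper name are assumptions for illustration, not the e2e framework's own code.

// Sketch only: create the RuntimeClass and treat AlreadyExists as success,
// so a stale "test-handler" from a previous run does not fail the setup.
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureRuntimeClass creates the named RuntimeClass if it is missing and
// reuses it if it already exists.
func ensureRuntimeClass(ctx context.Context, cs kubernetes.Interface, name string) error {
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Handler:    name, // the log above uses the handler name "test-handler"
	}
	_, err := cs.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // leftover from a previous run; keep it and continue
	}
	return err
}

func main() {
	// Kubeconfig path matches the one the suite logs; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := ensureRuntimeClass(context.Background(), cs, "test-handler"); err != nil {
		panic(err)
	}
	fmt.Println("RuntimeClass test-handler present")
}

Deleting the stale object before re-running the suite (for example, kubectl delete runtimeclass test-handler) would equally let the spec proceed.
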
May 5 01:03:04.894: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:04.900: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:04.900: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.900: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.900: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:04.906: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:04.906: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.906: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.906: INFO: pod1 from sched-pred-1044 started at 2023-05-05 01:02:55 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.906: INFO: pod2 from sched-pred-1044 started at 2023-05-05 01:02:57 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.906: INFO: pod3 from sched-pred-1044 started at 2023-05-05 01:02:59 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 May 5 01:03:04.923: INFO: Pod create-loop-devs-9mv4v requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod create-loop-devs-cfx6b requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kindnet-4spxt requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kindnet-m8hwr requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod kube-proxy-df52h requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kube-proxy-kzswj requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod pod1 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod pod2 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod pod3 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Using pod capacity: 47055905587 May 5 01:03:04.923: INFO: Node: v125-worker has local ephemeral resource allocatable: 470559055872 May 5 01:03:04.923: INFO: Node: v125-worker2 has local ephemeral resource allocatable: 470559055872 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 05/05/23 01:03:04.923 May 5 
01:03:05.015: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19ab75bcac00], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-0 to v125-worker2] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19ababe43d2a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19abacabb4b5], Reason = [Created], Message = [Created container overcommit-0] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19abbc20673f], Reason = [Started], Message = [Started container overcommit-0] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19ab7605780c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-1 to v125-worker2] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abad90d38e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abae352942], Reason = [Created], Message = [Created container overcommit-1] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abbcc9f4f5], Reason = [Started], Message = [Started container overcommit-1] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19ab789b2156], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-10 to v125-worker2] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19abd0e2abb1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19abd192807b], Reason = [Created], Message = [Created container overcommit-10] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19abdc8e2a5e], Reason = [Started], Message = [Started container overcommit-10] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19ab78d9a402], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-11 to v125-worker2] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abc4355130], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abc513d24d], Reason = [Created], Message = [Created container overcommit-11] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abd3f96aff], Reason = [Started], Message = [Started container overcommit-11] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ab7924ea6d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-12 to v125-worker2] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac16a2c1a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac1746da2a], Reason = [Created], Message = [Created 
container overcommit-12] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac24a63df5], Reason = [Started], Message = [Started container overcommit-12] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ab795aabfd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-13 to v125-worker] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac2d454f49], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac2ded7001], Reason = [Created], Message = [Created container overcommit-13] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac3ae715d0], Reason = [Started], Message = [Started container overcommit-13] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ab79a36441], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-14 to v125-worker] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Warning], Name = [overcommit-14.175c19ac05dcec73], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-nh6gs" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac4200a087], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac42bbee35], Reason = [Created], Message = [Created container overcommit-14] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac4e5ad41e], Reason = [Started], Message = [Started container overcommit-14] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ab79d22bd1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-15 to v125-worker] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac2eadc676], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac2f56329d], Reason = [Created], Message = [Created container overcommit-15] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac3dd14a6a], Reason = [Started], Message = [Started container overcommit-15] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ab7a09a921], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-16 to v125-worker] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Warning], Name = [overcommit-16.175c19ac11c35548], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-5784r" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac41912023], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac42194822], Reason = [Created], Message = [Created container overcommit-16] 05/05/23 
01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac4d664eb6], Reason = [Started], Message = [Started container overcommit-16] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19ab7a417f39], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-17 to v125-worker2] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abf1a8da94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abf24c86ac], Reason = [Created], Message = [Created container overcommit-17] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abfb57f400], Reason = [Started], Message = [Started container overcommit-17] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ab7ad15f50], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-18 to v125-worker2] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac0d9f29d1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac0e30ebb8], Reason = [Created], Message = [Created container overcommit-18] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac19bdc728], Reason = [Started], Message = [Started container overcommit-18] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19ab7b13e3a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-19 to v125-worker2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19abeddc2b01], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19abee936e8e], Reason = [Created], Message = [Created container overcommit-19] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19abf7d86903], Reason = [Started], Message = [Started container overcommit-19] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ab7648c528], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-2 to v125-worker] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac37c34c07], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac3896a754], Reason = [Created], Message = [Created container overcommit-2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac441286b1], Reason = [Started], Message = [Started container overcommit-2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ab7699c479], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-3 to v125-worker] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Warning], Name = [overcommit-3.175c19abee1b9b21], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-mzc8d" : failed 
to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac39981a62], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac3a558f5a], Reason = [Created], Message = [Created container overcommit-3] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac46e23287], Reason = [Started], Message = [Started container overcommit-3] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ab76ea0738], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-4 to v125-worker] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Warning], Name = [overcommit-4.175c19abf9eb99e1], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-5vbtg" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac2e1438f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac2ec5c1c9], Reason = [Created], Message = [Created container overcommit-4] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac3dae433f], Reason = [Started], Message = [Started container overcommit-4] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ab77373a3b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-5 to v125-worker] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac2e4b6ac4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac2f0d1d10], Reason = [Created], Message = [Created container overcommit-5] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac3dd63902], Reason = [Started], Message = [Started container overcommit-5] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ab77a3f733], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-6 to v125-worker] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Warning], Name = [overcommit-6.175c19abd66e6445], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-j9nq7" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac2d0a59e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac2db815cc], Reason = [Created], Message = [Created container overcommit-6] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac3ac08c80], Reason = [Started], Message = [Started container overcommit-6] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19ab77e0c33b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-7 to 
v125-worker2] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abd4bb7130], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abd55a76ea], Reason = [Created], Message = [Created container overcommit-7] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abe3e79f03], Reason = [Started], Message = [Started container overcommit-7] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ab7821cff5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-8 to v125-worker2] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac0c2e7e1f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac0cdb7425], Reason = [Created], Message = [Created container overcommit-8] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac18fc2749], Reason = [Started], Message = [Started container overcommit-8] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ab78640afb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-9 to v125-worker] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Warning], Name = [overcommit-9.175c19abe2284228], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-lf8bq" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac2dcf1578], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac2ea88026], Reason = [Created], Message = [Created container overcommit-9] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac3dd73011], Reason = [Started], Message = [Started container overcommit-9] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Warning], Name = [additional-pod.175c19aefcb377ce], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient ephemeral-storage. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.] 05/05/23 01:03:20.088 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4975" for this suite. 
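
The numbers in the local ephemeral storage spec line up as follows: each schedulable worker reports 470559055872 bytes of allocatable ephemeral-storage, the test sizes every overcommit pod at one tenth of that ("Using pod capacity: 47055905587"), and with two workers that yields the 20 saturating pods, so the additional pod is rejected with "Insufficient ephemeral-storage" while the control-plane node stays out of the count because of its NoSchedule taint. A small sketch of that arithmetic, assuming nothing beyond the values printed in the log:

// Sketch only: back-of-the-envelope check of the values logged above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := int64(470559055872) // per-node allocatable ephemeral-storage from the node dump
	podsPerNode := int64(10)           // the test divides each node's allocatable into ten pods
	workers := int64(2)                // v125-worker and v125-worker2; the control plane is tainted NoSchedule

	perPod := allocatable / podsPerNode
	fmt.Println("per-pod ephemeral-storage request:", perPod) // 47055905587, matching "Using pod capacity"
	fmt.Println("saturating pods:", workers*podsPerNode)      // 20, matching "Starting additional 20 Pods"
	fmt.Println("per-pod as a quantity:", resource.NewQuantity(perPod, resource.BinarySI).String())
}
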
05/05/23 01:03:21.101 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","completed":2,"skipped":327,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [16.243 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:04.863 May 5 01:03:04.863: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:04.865 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:04.876 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:04.88 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:04.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:04.891: INFO: Waiting for terminating namespaces to be deleted... May 5 01:03:04.894: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:04.900: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:04.900: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.900: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.900: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.900: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:04.906: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:04.906: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:04.906: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:04.906: INFO: pod1 from sched-pred-1044 started at 2023-05-05 01:02:55 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.906: INFO: pod2 from sched-pred-1044 started at 2023-05-05 01:02:57 +0000 UTC (1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 May 5 01:03:04.906: INFO: pod3 from sched-pred-1044 started at 2023-05-05 01:02:59 +0000 UTC 
(1 container statuses recorded) May 5 01:03:04.906: INFO: Container agnhost ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:122 May 5 01:03:04.923: INFO: Pod create-loop-devs-9mv4v requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod create-loop-devs-cfx6b requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kindnet-4spxt requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kindnet-m8hwr requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod kube-proxy-df52h requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod kube-proxy-kzswj requesting local ephemeral resource =0 on Node v125-worker May 5 01:03:04.923: INFO: Pod pod1 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod pod2 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Pod pod3 requesting local ephemeral resource =0 on Node v125-worker2 May 5 01:03:04.923: INFO: Using pod capacity: 47055905587 May 5 01:03:04.923: INFO: Node: v125-worker has local ephemeral resource allocatable: 470559055872 May 5 01:03:04.923: INFO: Node: v125-worker2 has local ephemeral resource allocatable: 470559055872 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 05/05/23 01:03:04.923 May 5 01:03:05.015: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19ab75bcac00], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-0 to v125-worker2] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19ababe43d2a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19abacabb4b5], Reason = [Created], Message = [Created container overcommit-0] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-0.175c19abbc20673f], Reason = [Started], Message = [Started container overcommit-0] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19ab7605780c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-1 to v125-worker2] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abad90d38e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abae352942], Reason = [Created], Message = [Created container overcommit-1] 05/05/23 01:03:20.077 STEP: Considering event: Type = [Normal], Name = [overcommit-1.175c19abbcc9f4f5], Reason = [Started], Message = [Started container overcommit-1] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19ab789b2156], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-10 to v125-worker2] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19abd0e2abb1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.078 STEP: 
Considering event: Type = [Normal], Name = [overcommit-10.175c19abd192807b], Reason = [Created], Message = [Created container overcommit-10] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-10.175c19abdc8e2a5e], Reason = [Started], Message = [Started container overcommit-10] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19ab78d9a402], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-11 to v125-worker2] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abc4355130], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abc513d24d], Reason = [Created], Message = [Created container overcommit-11] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-11.175c19abd3f96aff], Reason = [Started], Message = [Started container overcommit-11] 05/05/23 01:03:20.078 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ab7924ea6d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-12 to v125-worker2] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac16a2c1a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac1746da2a], Reason = [Created], Message = [Created container overcommit-12] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-12.175c19ac24a63df5], Reason = [Started], Message = [Started container overcommit-12] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ab795aabfd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-13 to v125-worker] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac2d454f49], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac2ded7001], Reason = [Created], Message = [Created container overcommit-13] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-13.175c19ac3ae715d0], Reason = [Started], Message = [Started container overcommit-13] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ab79a36441], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-14 to v125-worker] 05/05/23 01:03:20.079 STEP: Considering event: Type = [Warning], Name = [overcommit-14.175c19ac05dcec73], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-nh6gs" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac4200a087], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac42bbee35], Reason = [Created], Message = [Created container overcommit-14] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-14.175c19ac4e5ad41e], Reason = [Started], Message = [Started container 
overcommit-14] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ab79d22bd1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-15 to v125-worker] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac2eadc676], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac2f56329d], Reason = [Created], Message = [Created container overcommit-15] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-15.175c19ac3dd14a6a], Reason = [Started], Message = [Started container overcommit-15] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ab7a09a921], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-16 to v125-worker] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Warning], Name = [overcommit-16.175c19ac11c35548], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-5784r" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.08 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac41912023], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac42194822], Reason = [Created], Message = [Created container overcommit-16] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-16.175c19ac4d664eb6], Reason = [Started], Message = [Started container overcommit-16] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19ab7a417f39], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-17 to v125-worker2] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abf1a8da94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abf24c86ac], Reason = [Created], Message = [Created container overcommit-17] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-17.175c19abfb57f400], Reason = [Started], Message = [Started container overcommit-17] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ab7ad15f50], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-18 to v125-worker2] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac0d9f29d1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac0e30ebb8], Reason = [Created], Message = [Created container overcommit-18] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-18.175c19ac19bdc728], Reason = [Started], Message = [Started container overcommit-18] 05/05/23 01:03:20.081 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19ab7b13e3a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-19 to v125-worker2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = 
[overcommit-19.175c19abeddc2b01], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19abee936e8e], Reason = [Created], Message = [Created container overcommit-19] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-19.175c19abf7d86903], Reason = [Started], Message = [Started container overcommit-19] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ab7648c528], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-2 to v125-worker] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac37c34c07], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac3896a754], Reason = [Created], Message = [Created container overcommit-2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-2.175c19ac441286b1], Reason = [Started], Message = [Started container overcommit-2] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ab7699c479], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-3 to v125-worker] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Warning], Name = [overcommit-3.175c19abee1b9b21], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-mzc8d" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac39981a62], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.082 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac3a558f5a], Reason = [Created], Message = [Created container overcommit-3] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-3.175c19ac46e23287], Reason = [Started], Message = [Started container overcommit-3] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ab76ea0738], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-4 to v125-worker] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Warning], Name = [overcommit-4.175c19abf9eb99e1], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-5vbtg" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac2e1438f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac2ec5c1c9], Reason = [Created], Message = [Created container overcommit-4] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-4.175c19ac3dae433f], Reason = [Started], Message = [Started container overcommit-4] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ab77373a3b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-5 to v125-worker] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac2e4b6ac4], Reason = [Pulled], 
Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.083 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac2f0d1d10], Reason = [Created], Message = [Created container overcommit-5] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-5.175c19ac3dd63902], Reason = [Started], Message = [Started container overcommit-5] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ab77a3f733], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-6 to v125-worker] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Warning], Name = [overcommit-6.175c19abd66e6445], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-j9nq7" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac2d0a59e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac2db815cc], Reason = [Created], Message = [Created container overcommit-6] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-6.175c19ac3ac08c80], Reason = [Started], Message = [Started container overcommit-6] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19ab77e0c33b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-7 to v125-worker2] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abd4bb7130], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abd55a76ea], Reason = [Created], Message = [Created container overcommit-7] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-7.175c19abe3e79f03], Reason = [Started], Message = [Started container overcommit-7] 05/05/23 01:03:20.084 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ab7821cff5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-8 to v125-worker2] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac0c2e7e1f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac0cdb7425], Reason = [Created], Message = [Created container overcommit-8] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-8.175c19ac18fc2749], Reason = [Started], Message = [Started container overcommit-8] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ab78640afb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4975/overcommit-9 to v125-worker] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Warning], Name = [overcommit-9.175c19abe2284228], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-lf8bq" : failed to sync configmap cache: timed out waiting for the condition] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac2dcf1578], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" 
already present on machine] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac2ea88026], Reason = [Created], Message = [Created container overcommit-9] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Normal], Name = [overcommit-9.175c19ac3dd73011], Reason = [Started], Message = [Started container overcommit-9] 05/05/23 01:03:20.085 STEP: Considering event: Type = [Warning], Name = [additional-pod.175c19aefcb377ce], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient ephemeral-storage. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.] 05/05/23 01:03:20.088 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:21.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4975" for this suite. 05/05/23 01:03:21.101 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:21.123 May 5 01:03:21.123: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:21.125 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:21.136 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:21.139 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:21.143: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:21.151: INFO: Waiting for terminating namespaces to be deleted... 
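For reference, the LocalStorageCapacityIsolation case above takes each worker's allocatable ephemeral storage (470559055872 bytes), splits it across the 10 pods it will place per node (47055905587 bytes each), starts 20 such pods over the two workers to saturate the cluster, and then expects one additional pod to be rejected with "Insufficient ephemeral-storage". Below is a minimal sketch of that kind of pod spec, assuming the k8s.io/api/core/v1 and apimachinery packages; the helper name overcommitPod is illustrative, not the test's own code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overcommitPod builds a pause pod that requests (and limits itself to) a
// fixed slice of a node's allocatable ephemeral storage, in the spirit of
// the overcommit-N pods in the log above. Name is illustrative only.
func overcommitPod(name string, ephemeralBytes int64) *v1.Pod {
	q := *resource.NewQuantity(ephemeralBytes, resource.BinarySI)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.8",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: q},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: q},
				},
			}},
		},
	}
}

func main() {
	// Numbers from the log: 470559055872 bytes allocatable per worker,
	// 10 pods per worker, i.e. 47055905587 bytes per pod.
	perPod := int64(470559055872 / 10)
	pod := overcommitPod("overcommit-0", perPod)
	limit := pod.Spec.Containers[0].Resources.Limits[v1.ResourceEphemeralStorage]
	fmt.Println(pod.Name, limit.String())
}

Twenty pods at that size account for the full allocatable ephemeral storage of both workers, which is why the additional-pod event above reports "2 Insufficient ephemeral-storage".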
May 5 01:03:21.154: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:21.162: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:21.162: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:21.162: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-13 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container overcommit-13 ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-14 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container overcommit-14 ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-15 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-15 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-16 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-16 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-2 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-2 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-3 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-3 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-4 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-4 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-5 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-5 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-6 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-6 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-9 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-9 ready: true, restart count 0 May 5 01:03:21.163: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:21.177: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:21.177: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:21.177: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container kube-proxy ready: true, 
restart count 0 May 5 01:03:21.177: INFO: overcommit-0 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-0 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-1 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-1 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-10 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-10 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-11 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-11 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-12 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-12 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-17 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-17 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-18 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-18 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-19 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-19 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-7 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-7 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-8 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-8 ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:21.177 May 5 01:03:21.184: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6281" to be "running" May 5 01:03:21.187: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.988475ms May 5 01:03:23.191: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006953179s May 5 01:03:23.191: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:23.194 STEP: Trying to apply a random label on the found node. 05/05/23 01:03:23.203 STEP: verifying the node has the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 42 05/05/23 01:03:23.219 STEP: Trying to relaunch the pod, now with labels. 05/05/23 01:03:23.223 May 5 01:03:23.228: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-6281" to be "not pending" May 5 01:03:23.231: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064275ms May 5 01:03:25.235: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007285462s May 5 01:03:25.235: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 off the node v125-worker2 05/05/23 01:03:25.238 STEP: verifying the node doesn't have the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 05/05/23 01:03:25.251 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:25.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6281" for this suite. 05/05/23 01:03:25.259 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","completed":3,"skipped":564,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [4.140 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:21.123 May 5 01:03:21.123: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:21.125 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:21.136 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:21.139 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:21.143: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:21.151: INFO: Waiting for terminating namespaces to be deleted... 
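The required-NodeAffinity case above stamps a random label (kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0=42) onto v125-worker2 and then relaunches the pod so that it can only land on the labelled node. The sketch below shows one way to express that required match with the core/v1 types; the log does not show whether the test uses a node selector or the affinity form, so treat this as illustrative rather than the test's exact spec.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required node affinity keyed on the label the test applied; a pod
	// carrying this can only be scheduled onto nodes with that exact label.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      "kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"42"},
							}},
						}},
					},
				},
			},
			Containers: []v1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause:3.8"}},
		},
	}
	req := pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution
	fmt.Println(pod.Name, "requires", req.NodeSelectorTerms[0].MatchExpressions[0].Key)
}

Because the match is "required during scheduling", the with-labels pod can only go to the node that received the label, which is why the test finishes by removing that label from v125-worker2.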
May 5 01:03:21.154: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:21.162: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:21.162: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:21.162: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-13 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container overcommit-13 ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-14 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.162: INFO: Container overcommit-14 ready: true, restart count 0 May 5 01:03:21.162: INFO: overcommit-15 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-15 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-16 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-16 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-2 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-2 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-3 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-3 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-4 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-4 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-5 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-5 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-6 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-6 ready: true, restart count 0 May 5 01:03:21.163: INFO: overcommit-9 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.163: INFO: Container overcommit-9 ready: true, restart count 0 May 5 01:03:21.163: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:21.177: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:21.177: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:21.177: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container kube-proxy ready: true, 
restart count 0 May 5 01:03:21.177: INFO: overcommit-0 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-0 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-1 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-1 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-10 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-10 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-11 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-11 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-12 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-12 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-17 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-17 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-18 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-18 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-19 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-19 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-7 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-7 ready: true, restart count 0 May 5 01:03:21.177: INFO: overcommit-8 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:21.177: INFO: Container overcommit-8 ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:534 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:21.177 May 5 01:03:21.184: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6281" to be "running" May 5 01:03:21.187: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.988475ms May 5 01:03:23.191: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006953179s May 5 01:03:23.191: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:23.194 STEP: Trying to apply a random label on the found node. 05/05/23 01:03:23.203 STEP: verifying the node has the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 42 05/05/23 01:03:23.219 STEP: Trying to relaunch the pod, now with labels. 05/05/23 01:03:23.223 May 5 01:03:23.228: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-6281" to be "not pending" May 5 01:03:23.231: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064275ms May 5 01:03:25.235: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007285462s May 5 01:03:25.235: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 off the node v125-worker2 05/05/23 01:03:25.238 STEP: verifying the node doesn't have the label kubernetes.io/e2e-799a6f21-b795-4a00-ba58-547cbe10b7b0 05/05/23 01:03:25.251 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:25.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6281" for this suite. 05/05/23 01:03:25.259 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:25.266 May 5 01:03:25.267: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:25.268 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:25.278 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:25.281 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:25.284: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:25.292: INFO: Waiting for terminating namespaces to be deleted... May 5 01:03:25.295: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:25.302: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:25.302: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:25.302: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-13 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-13 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-14 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-14 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-15 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-15 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-16 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-16 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-2 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-2 ready: true, restart count 0 May 
5 01:03:25.302: INFO: overcommit-3 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-3 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-4 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-4 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-5 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-5 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-6 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-6 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-9 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-9 ready: true, restart count 0 May 5 01:03:25.302: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:25.312: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:25.312: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:25.312: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-0 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-0 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-1 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-1 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-10 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-10 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-11 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-11 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-12 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-12 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-17 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-17 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-18 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-18 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-19 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-19 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-7 from sched-pred-4975 started at 
2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-7 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-8 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-8 ready: true, restart count 0 May 5 01:03:25.312: INFO: with-labels from sched-pred-6281 started at 2023-05-05 01:03:23 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container with-labels ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:726 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:03:25.312 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:25.313 May 5 01:03:25.320: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1078" to be "running" May 5 01:03:25.323: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75047ms May 5 01:03:27.327: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006754876s May 5 01:03:29.327: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.007264398s May 5 01:03:29.327: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:29.33 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:29.339 May 5 01:03:29.344: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1078" to be "running" May 5 01:03:29.347: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.224512ms May 5 01:03:31.355: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010757558s May 5 01:03:33.352: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007655464s May 5 01:03:35.351: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.007440192s May 5 01:03:35.352: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:35.355 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 05/05/23 01:03:35.363 [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 [AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:737 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker 05/05/23 01:03:37.403 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 05/05/23 01:03:37.415 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker2 05/05/23 01:03:37.419 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 05/05/23 01:03:37.431 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:37.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1078" for this suite. 
05/05/23 01:03:37.438 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","completed":4,"skipped":605,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [12.175 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering test/e2e/scheduling/predicates.go:722 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:25.266 May 5 01:03:25.267: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:25.268 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:25.278 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:25.281 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:25.284: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:25.292: INFO: Waiting for terminating namespaces to be deleted... May 5 01:03:25.295: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:25.302: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:25.302: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:25.302: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-13 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-13 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-14 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-14 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-15 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-15 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-16 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-16 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-2 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-2 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-3 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: 
Container overcommit-3 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-4 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-4 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-5 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-5 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-6 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-6 ready: true, restart count 0 May 5 01:03:25.302: INFO: overcommit-9 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.302: INFO: Container overcommit-9 ready: true, restart count 0 May 5 01:03:25.302: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:25.312: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:25.312: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:25.312: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-0 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-0 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-1 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-1 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-10 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-10 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-11 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-11 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-12 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-12 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-17 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-17 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-18 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-18 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-19 from sched-pred-4975 started at 2023-05-05 01:03:05 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-19 ready: true, restart count 0 May 5 01:03:25.312: INFO: overcommit-7 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-7 ready: true, restart count 0 May 5 01:03:25.312: 
INFO: overcommit-8 from sched-pred-4975 started at 2023-05-05 01:03:04 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container overcommit-8 ready: true, restart count 0 May 5 01:03:25.312: INFO: with-labels from sched-pred-6281 started at 2023-05-05 01:03:23 +0000 UTC (1 container statuses recorded) May 5 01:03:25.312: INFO: Container with-labels ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:726 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:03:25.312 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:25.313 May 5 01:03:25.320: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1078" to be "running" May 5 01:03:25.323: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75047ms May 5 01:03:27.327: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006754876s May 5 01:03:29.327: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.007264398s May 5 01:03:29.327: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:29.33 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:03:29.339 May 5 01:03:29.344: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-1078" to be "running" May 5 01:03:29.347: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.224512ms May 5 01:03:31.355: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010757558s May 5 01:03:33.352: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007655464s May 5 01:03:35.351: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 6.007440192s May 5 01:03:35.352: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:35.355 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 05/05/23 01:03:35.363 [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:743 [AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:737 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker 05/05/23 01:03:37.403 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 05/05/23 01:03:37.415 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v125-worker2 05/05/23 01:03:37.419 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter 05/05/23 01:03:37.431 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:37.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1078" for this suite. 
05/05/23 01:03:37.438 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:37.507 May 5 01:03:37.507: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:37.509 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:37.518 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:37.521 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:37.525: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:37.533: INFO: Waiting for terminating namespaces to be deleted... 
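The PodTopologySpread Filtering case above applies a dedicated topologyKey (kubernetes.io/e2e-pts-filter) to the two workers and checks that 4 replicas with MaxSkew=1 land two-and-two across them. A sketch of the constraint involved is below, assuming the core/v1 types; the pod-selecting labels ("app": "e2e-pts-filter") are a stand-in, since the log does not show the ReplicaSet's actual labels.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With MaxSkew=1 and DoNotSchedule, the scheduler filters out any node
	// whose placement would leave the per-topology-domain pod counts more
	// than 1 apart, so 4 matching pods over 2 labelled nodes end up 2/2.
	spread := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: v1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // stand-in labels
		},
	}
	podSpec := v1.PodSpec{
		TopologySpreadConstraints: []v1.TopologySpreadConstraint{spread},
		Containers:                []v1.Container{{Name: "e2e-pts-filter", Image: "k8s.gcr.io/pause:3.8"}},
	}
	fmt.Println("spreading on", podSpec.TopologySpreadConstraints[0].TopologyKey)
}

The rs-e2e-pts-filter pods logged in the next test's node inventory (two on v125-worker, two on v125-worker2) are consistent with that 2/2 outcome.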
May 5 01:03:37.536: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:37.541: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:37.541: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:37.541: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:37.541: INFO: rs-e2e-pts-filter-8htpx from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.541: INFO: rs-e2e-pts-filter-prdh9 from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.541: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:37.547: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:37.547: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:37.547: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:37.547: INFO: rs-e2e-pts-filter-shqxc from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.547: INFO: rs-e2e-pts-filter-vtcqh from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 05/05/23 01:03:37.547 May 5 01:03:37.554: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-8515" to be "running" May 5 01:03:37.557: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966194ms May 5 01:03:39.562: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.008256404s May 5 01:03:39.562: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:39.565 STEP: Trying to apply a random taint on the found node. 05/05/23 01:03:39.574 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:39.589 STEP: Trying to apply a random label on the found node. 
05/05/23 01:03:39.593 STEP: verifying the node has the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 testing-label-value 05/05/23 01:03:39.605 STEP: Trying to relaunch the pod, still no tolerations. 05/05/23 01:03:39.608 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b30e39783f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/without-toleration to v125-worker2] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b33199e708], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3326526a0], Reason = [Created], Message = [Created container without-toleration] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3419ddc21], Reason = [Started], Message = [Started container without-toleration] 05/05/23 01:03:39.613 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.175c19b3890b93b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 05/05/23 01:03:39.622 STEP: Removing taint off the node 05/05/23 01:03:40.623 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.175c19b3890b93b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 
05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b30e39783f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/without-toleration to v125-worker2] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b33199e708], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3326526a0], Reason = [Created], Message = [Created container without-toleration] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3419ddc21], Reason = [Started], Message = [Started container without-toleration] 05/05/23 01:03:40.627 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:40.644 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3c6778112], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/still-no-tolerations to v125-worker2] 05/05/23 01:03:40.652 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3e0702b4d], Reason = [Killing], Message = [Stopping container without-toleration] 05/05/23 01:03:41.088 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3ead9c4b3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:41.263 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3eb8e0ae6], Reason = [Created], Message = [Created container still-no-tolerations] 05/05/23 01:03:41.274 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3f8e229ac], Reason = [Started], Message = [Started container still-no-tolerations] 05/05/23 01:03:41.498 STEP: removing the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 off the node v125-worker2 05/05/23 01:03:41.653 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 05/05/23 01:03:41.667 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:41.673 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8515" for this suite. 
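For readers reproducing the "taints-tolerations is respected if not matching" run above outside the e2e framework, the following is a minimal sketch (not the test's own code) of the shape being exercised: a NoSchedule taint on the chosen node plus a relaunched pod that carries no matching toleration and is pinned to that node, which is why the log shows FailedScheduling with "untolerated taint". The key and label names are placeholders standing in for the random ones the test generates, and the nodeSelector is an assumption inferred from the "didn't match Pod's node affinity/selector" message. It assumes the k8s.io/api and k8s.io/apimachinery modules are available.

// Sketch only: mirrors the scenario above with hand-written names in place of
// the test's randomly generated taint/label keys.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Taint applied to the found node (random key/value in the log above).
	taint := corev1.Taint{
		Key:    "example.com/e2e-taint-key", // placeholder for the generated key
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Relaunched pod: pinned to the now-tainted node but with no toleration,
	// so the scheduler reports FailedScheduling, as in the events above.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "still-no-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"example.com/e2e-label-key": "testing-label-value", // placeholder label
			},
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.8"}},
		},
	}

	fmt.Printf("taint: %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("pod %q has no matching toleration, so it stays Pending on the tainted node\n", pod.Name)
}

Once the taint is removed (the "Removing taint off the node" step above), the same pod schedules normally, which is the behaviour the spec asserts.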
05/05/23 01:03:41.681 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","completed":5,"skipped":1697,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [4.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:37.507 May 5 01:03:37.507: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:03:37.509 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:37.518 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:37.521 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:03:37.525: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:03:37.533: INFO: Waiting for terminating namespaces to be deleted... May 5 01:03:37.536: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:03:37.541: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:37.541: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:37.541: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:37.541: INFO: rs-e2e-pts-filter-8htpx from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.541: INFO: rs-e2e-pts-filter-prdh9 from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.541: INFO: Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.541: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:03:37.547: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container loopdev ready: true, restart count 0 May 5 01:03:37.547: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:03:37.547: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:03:37.547: INFO: rs-e2e-pts-filter-shqxc from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: 
Container e2e-pts-filter ready: true, restart count 0 May 5 01:03:37.547: INFO: rs-e2e-pts-filter-vtcqh from sched-pred-1078 started at 2023-05-05 01:03:35 +0000 UTC (1 container statuses recorded) May 5 01:03:37.547: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:625 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 05/05/23 01:03:37.547 May 5 01:03:37.554: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-8515" to be "running" May 5 01:03:37.557: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966194ms May 5 01:03:39.562: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.008256404s May 5 01:03:39.562: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:03:39.565 STEP: Trying to apply a random taint on the found node. 05/05/23 01:03:39.574 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:39.589 STEP: Trying to apply a random label on the found node. 05/05/23 01:03:39.593 STEP: verifying the node has the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 testing-label-value 05/05/23 01:03:39.605 STEP: Trying to relaunch the pod, still no tolerations. 05/05/23 01:03:39.608 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b30e39783f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/without-toleration to v125-worker2] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b33199e708], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3326526a0], Reason = [Created], Message = [Created container without-toleration] 05/05/23 01:03:39.612 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3419ddc21], Reason = [Started], Message = [Started container without-toleration] 05/05/23 01:03:39.613 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.175c19b3890b93b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 05/05/23 01:03:39.622 STEP: Removing taint off the node 05/05/23 01:03:40.623 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.175c19b3890b93b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 
05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b30e39783f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/without-toleration to v125-worker2] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b33199e708], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3326526a0], Reason = [Created], Message = [Created container without-toleration] 05/05/23 01:03:40.627 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3419ddc21], Reason = [Started], Message = [Started container without-toleration] 05/05/23 01:03:40.627 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:40.644 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3c6778112], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8515/still-no-tolerations to v125-worker2] 05/05/23 01:03:40.652 STEP: Considering event: Type = [Normal], Name = [without-toleration.175c19b3e0702b4d], Reason = [Killing], Message = [Stopping container without-toleration] 05/05/23 01:03:41.088 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3ead9c4b3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 05/05/23 01:03:41.263 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3eb8e0ae6], Reason = [Created], Message = [Created container still-no-tolerations] 05/05/23 01:03:41.274 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.175c19b3f8e229ac], Reason = [Started], Message = [Started container still-no-tolerations] 05/05/23 01:03:41.498 STEP: removing the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 off the node v125-worker2 05/05/23 01:03:41.653 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-078d1a9a-12e3-47ea-a16b-784bb8f6d600 05/05/23 01:03:41.667 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-12c4340b-8189-4fcf-8de7-0fdc0f18e013=testing-taint-value:NoSchedule 05/05/23 01:03:41.673 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:03:41.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8515" for this suite. 
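The "Considering event" lines above are the test polling the pod's events for reasons such as FailedScheduling, Scheduled, Pulled and Started. A rough sketch of how one might fetch the same events with client-go is below; the namespace and pod name are copied from the log, the kubeconfig handling is an illustrative assumption, and this is not the framework's own helper.

// Sketch only: listing the events quoted above for one pod via client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, pod := "sched-pred-8515", "still-no-tolerations" // values from the log above
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
}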
05/05/23 01:03:41.681 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:41.716 May 5 01:03:41.716: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:03:41.717 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:41.728 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:41.731 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:03:41.735: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:04:41.762: INFO: Waiting for terminating namespaces to be deleted... May 5 01:04:41.766: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:04:41.779: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:04:41.779: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 5 01:04:41.785: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:41.785: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:41.785: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:41.785: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:41.785: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:41.785: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:41.785: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:41.785: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 STEP: Trying to launch a pod with a label to get a node which can launch it. 
05/05/23 01:04:41.785 May 5 01:04:41.795: INFO: Waiting up to 1m0s for pod "pod-with-label-security-s1" in namespace "sched-priority-2135" to be "running" May 5 01:04:41.798: INFO: Pod "pod-with-label-security-s1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419857ms May 5 01:04:43.803: INFO: Pod "pod-with-label-security-s1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007622928s May 5 01:04:43.803: INFO: Pod "pod-with-label-security-s1" satisfied condition "running" STEP: Verifying the node has a label kubernetes.io/hostname 05/05/23 01:04:43.806 May 5 01:04:43.817: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:43.817: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:43.817: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:43.817: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:43.817: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:43.817: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:43.817: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:43.817: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:43.823: INFO: Waiting for running... May 5 01:04:43.823: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:04:48.883 May 5 01:04:48.883: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:48.883: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:48.883: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: b4a6b2a5-50b8-455e-ac80-8f2025927bc3-0, Cpu: 52599, Mem: 40302548582 May 5 01:04:48.883: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Node: v125-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 May 5 01:04:48.883: INFO: Node: v125-worker, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
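The cpuFraction and memFraction values printed by ComputeCPUMemFraction above are plain ratios of the summed requests to the node's allocatable capacity. The snippet below simply reproduces the v125-worker numbers from the log (200 requested milliCPU against 88000 allocatable, 157286400 requested bytes against 67412086784 allocatable); how the framework sums the per-pod requests is not restated here.

// Reproduces the fraction arithmetic from the ComputeCPUMemFraction lines above.
package main

import "fmt"

func main() {
	totalRequestedCPUMilli := 200.0       // totalRequestedCPUResource (log)
	cpuAllocatableMilli := 88000.0        // cpuAllocatableMil (log)
	totalRequestedMemBytes := 157286400.0 // totalRequestedMemResource (log)
	memAllocatableBytes := 67412086784.0  // memAllocatableVal (log)

	fmt.Println("cpuFraction:", totalRequestedCPUMilli/cpuAllocatableMilli) // ~0.00227
	fmt.Println("memFraction:", totalRequestedMemBytes/memAllocatableBytes) // ~0.00233
}

The "balanced pods" created afterwards push both fractions toward roughly 0.6 on each worker (52799/88000), so that neither node is preferred on resource usage alone and the anti-affinity term decides placement.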
05/05/23 01:04:48.883 May 5 01:04:48.883: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:48.883: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:48.883: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: b233241d-9f51-4fab-a151-ef703f7c17ef-0, Cpu: 52599, Mem: 40302548582 May 5 01:04:48.883: INFO: Node: v125-worker2, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 May 5 01:04:48.883: INFO: Node: v125-worker2, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168 STEP: Trying to launch the pod with podAntiAffinity. 05/05/23 01:04:48.883 STEP: Wait the pod becomes running 05/05/23 01:04:48.889 May 5 01:04:48.889: INFO: Waiting up to 5m0s for pod "pod-with-pod-antiaffinity" in namespace "sched-priority-2135" to be "running" May 5 01:04:48.893: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488434ms May 5 01:04:50.898: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008582169s May 5 01:04:52.898: INFO: Pod "pod-with-pod-antiaffinity": Phase="Running", Reason="", readiness=true. Elapsed: 4.008704884s May 5 01:04:52.898: INFO: Pod "pod-with-pod-antiaffinity" satisfied condition "running" STEP: Verify the pod was scheduled to the expected node. 05/05/23 01:04:52.901 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 May 5 01:04:54.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2135" for this suite. 05/05/23 01:04:54.925 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","completed":6,"skipped":2165,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [73.214 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:03:41.716 May 5 01:03:41.716: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:03:41.717 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:03:41.728 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:03:41.731 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:03:41.735: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:04:41.762: INFO: Waiting for terminating namespaces to be deleted... 
May 5 01:04:41.766: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:04:41.779: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:04:41.779: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 5 01:04:41.785: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:41.785: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:41.785: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:41.785: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:41.785: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:41.785: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:41.785: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:41.785: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:41.785: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 STEP: Trying to launch a pod with a label to get a node which can launch it. 05/05/23 01:04:41.785 May 5 01:04:41.795: INFO: Waiting up to 1m0s for pod "pod-with-label-security-s1" in namespace "sched-priority-2135" to be "running" May 5 01:04:41.798: INFO: Pod "pod-with-label-security-s1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.419857ms May 5 01:04:43.803: INFO: Pod "pod-with-label-security-s1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007622928s May 5 01:04:43.803: INFO: Pod "pod-with-label-security-s1" satisfied condition "running" STEP: Verifying the node has a label kubernetes.io/hostname 05/05/23 01:04:43.806 May 5 01:04:43.817: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:43.817: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:43.817: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:43.817: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:43.817: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:43.817: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:43.817: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:43.817: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:04:43.817: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:04:43.823: INFO: Waiting for running... May 5 01:04:43.823: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:04:48.883 May 5 01:04:48.883: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:04:48.883: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:04:48.883: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: b4a6b2a5-50b8-455e-ac80-8f2025927bc3-0, Cpu: 52599, Mem: 40302548582 May 5 01:04:48.883: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Node: v125-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 May 5 01:04:48.883: INFO: Node: v125-worker, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168 STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:04:48.883 May 5 01:04:48.883: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:04:48.883: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:04:48.883: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:04:48.883: INFO: Pod for on the node: b233241d-9f51-4fab-a151-ef703f7c17ef-0, Cpu: 52599, Mem: 40302548582 May 5 01:04:48.883: INFO: Node: v125-worker2, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 May 5 01:04:48.883: INFO: Node: v125-worker2, totalRequestedMemResource: 40459834982, memAllocatableVal: 67412086784, memFraction: 0.6001866566101168 STEP: Trying to launch the pod with podAntiAffinity. 
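The "pod-with-pod-antiaffinity" step above depends on a required podAntiAffinity term steering the pod away from the node that runs pod-with-label-security-s1. The exact spec is not printed in the log; the sketch below shows the general shape, where the security=S1 label and the kubernetes.io/hostname topology key are assumptions inferred from the pod names and the hostname-label check above, not copied from the test source.

// Sketch only: a required podAntiAffinity term of the kind exercised above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security", // assumed label key
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				// Node-level spreading: avoid any node already running a pod
				// that matches the selector above.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity:   antiAffinity,
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.8"}},
		},
	}
	fmt.Printf("%s must land on a node with no pod labeled security in [S1]\n", pod.Name)
}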
05/05/23 01:04:48.883 STEP: Wait the pod becomes running 05/05/23 01:04:48.889 May 5 01:04:48.889: INFO: Waiting up to 5m0s for pod "pod-with-pod-antiaffinity" in namespace "sched-priority-2135" to be "running" May 5 01:04:48.893: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488434ms May 5 01:04:50.898: INFO: Pod "pod-with-pod-antiaffinity": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008582169s May 5 01:04:52.898: INFO: Pod "pod-with-pod-antiaffinity": Phase="Running", Reason="", readiness=true. Elapsed: 4.008704884s May 5 01:04:52.898: INFO: Pod "pod-with-pod-antiaffinity" satisfied condition "running" STEP: Verify the pod was scheduled to the expected node. 05/05/23 01:04:52.901 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 May 5 01:04:54.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2135" for this suite. 05/05/23 01:04:54.925 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:04:54.945 May 5 01:04:54.945: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 05/05/23 01:04:54.947 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:04:54.958 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:04:54.962 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. Schedulable zone count = 0 05/05/23 01:04:54.97 May 5 01:04:54.971: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:187 May 5 01:04:54.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-9730" for this suite. 05/05/23 01:04:54.975 [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 ------------------------------ S [SKIPPED] [0.035 seconds] [sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51 should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:04:54.945 May 5 01:04:54.945: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename multi-az 05/05/23 01:04:54.947 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:04:54.958 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:04:54.962 [BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51 STEP: Checking for multi-zone cluster. 
Schedulable zone count = 0 05/05/23 01:04:54.97 May 5 01:04:54.971: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/framework.go:187 May 5 01:04:54.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-9730" for this suite. 05/05/23 01:04:54.975 [AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72 << End Captured GinkgoWriter Output Schedulable zone count is 0, only run for multi-zone clusters, skipping test In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:04:55.041 May 5 01:04:55.042: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:04:55.043 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:04:55.058 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:04:55.062 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:04:55.067: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:05:55.092: INFO: Waiting for terminating namespaces to be deleted... May 5 01:05:55.095: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:05:55.107: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:05:55.107: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
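The Multi-AZ spec above is skipped because the schedulable zone count is 0. Zones are normally derived from node topology labels; the sketch below counts distinct values of the standard topology.kubernetes.io/zone label, which is an assumption about how the count comes out as 0 here (no node in this cluster appears to carry a zone label), not the framework's exact helper, and it ignores unschedulable or tainted nodes for brevity.

// Sketch only: counting distinct node zones via the standard topology label.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	zones := map[string]bool{}
	for _, n := range nodes.Items {
		if z, ok := n.Labels["topology.kubernetes.io/zone"]; ok {
			zones[z] = true
		}
	}
	fmt.Println("zone count:", len(zones)) // 0 here, hence the skip above
}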
May 5 01:05:55.114: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:05:55.114: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:05:55.114: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:55.114: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:55.114: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:05:55.114: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:05:55.114: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:55.114: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:05:55.114 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:05:55.114 May 5 01:05:55.123: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9576" to be "running" May 5 01:05:55.126: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627423ms May 5 01:05:57.131: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007797345s May 5 01:05:57.131: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:05:57.134 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:05:57.143 May 5 01:05:57.148: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9576" to be "running" May 5 01:05:57.151: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914471ms May 5 01:05:59.155: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006932133s May 5 01:05:59.155: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:05:59.158 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
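The PodTopologySpread Scoring spec above labels the two workers with a dedicated topologyKey (kubernetes.io/e2e-pts-score), loads one of them with a 4-replica ReplicaSet, and then expects the test-pod to be scored onto the emptier node. The test-pod's spec is not printed in the log; the sketch below shows one plausible shape of such a constraint, where the foo=bar selector and the ScheduleAnyway (soft, scoring-only) action are illustrative assumptions.

// Sketch only: a soft topology spread constraint over the dedicated key above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	spread := corev1.TopologySpreadConstraint{
		MaxSkew:     1,
		TopologyKey: "kubernetes.io/e2e-pts-score",
		// ScheduleAnyway makes the constraint a scoring preference rather than
		// a hard filter, matching a "preferably scheduled" expectation.
		WhenUnsatisfiable: corev1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"foo": "bar"}, // placeholder selector
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"foo": "bar"},
		},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{spread},
			Containers:                []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.8"}},
		},
	}
	fmt.Printf("%s prefers the %s domain that currently holds fewer matching pods\n",
		pod.Name, spread.TopologyKey)
}

With four matching replicas pinned to v125-worker and none on v125-worker2, the less-skewed choice is v125-worker2, which is exactly what the "Verifying if the test-pod lands on node v125-worker2" step above checks.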
05/05/23 01:05:59.166 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 May 5 01:05:59.210: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:05:59.210: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:05:59.210: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:05:59.210: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:05:59.210: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:59.211: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:59.211: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:05:59.211: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:05:59.211: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:05:59.211: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:05:59.211: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:59.211: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:59.216: INFO: Waiting for running... May 5 01:05:59.216: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:06:04.276 May 5 01:06:04.276: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:06:04.276: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:06:04.277: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: cfe6f835-fcdb-4a70-9f62-aa65f27a08bb-0, Cpu: 43800, Mem: 33561339904 May 5 01:06:04.277: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:06:04.277: INFO: Node: v125-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:06:04.277 May 5 01:06:04.277: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:06:04.277: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:06:04.277: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: dea7e9cb-ca21-4647-8ceb-a41134de160f-0, Cpu: 43800, Mem: 33561339904 May 5 01:06:04.277: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:06:04.277: INFO: Node: v125-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Run a ReplicaSet with 4 replicas on node "v125-worker" 05/05/23 01:06:04.277 May 5 01:06:08.296: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-9576" to be "running" May 5 01:06:08.299: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.066395ms May 5 01:06:10.305: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008294415s May 5 01:06:10.305: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v125-worker2" 05/05/23 01:06:10.308 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker 05/05/23 01:06:12.328 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 05/05/23 01:06:12.341 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker2 05/05/23 01:06:12.344 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 05/05/23 01:06:12.356 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 May 5 01:06:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9576" for this suite. 05/05/23 01:06:12.364 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","completed":7,"skipped":3217,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [77.327 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring test/e2e/scheduling/priorities.go:267 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:04:55.041 May 5 01:04:55.042: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:04:55.043 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:04:55.058 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:04:55.062 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:04:55.067: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:05:55.092: INFO: Waiting for terminating namespaces to be deleted... May 5 01:05:55.095: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:05:55.107: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:05:55.107: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 5 01:05:55.114: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:05:55.114: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:05:55.114: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:55.114: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:55.114: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:05:55.114: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:05:55.114: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:05:55.114: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:55.114: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:05:55.114 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:05:55.114 May 5 01:05:55.123: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9576" to be "running" May 5 01:05:55.126: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627423ms May 5 01:05:57.131: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007797345s May 5 01:05:57.131: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:05:57.134 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:05:57.143 May 5 01:05:57.148: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-priority-9576" to be "running" May 5 01:05:57.151: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914471ms May 5 01:05:59.155: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006932133s May 5 01:05:59.155: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:05:59.158 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
05/05/23 01:05:59.166 [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 May 5 01:05:59.210: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:05:59.210: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:05:59.210: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:05:59.210: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:05:59.210: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:59.211: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:59.211: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:05:59.211: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:05:59.211: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:05:59.211: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:05:59.211: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:05:59.211: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:05:59.216: INFO: Waiting for running... May 5 01:05:59.216: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:06:04.276 May 5 01:06:04.276: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:06:04.276: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:06:04.277: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: cfe6f835-fcdb-4a70-9f62-aa65f27a08bb-0, Cpu: 43800, Mem: 33561339904 May 5 01:06:04.277: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:06:04.277: INFO: Node: v125-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:06:04.277 May 5 01:06:04.277: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:06:04.277: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:06:04.277: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:06:04.277: INFO: Pod for on the node: dea7e9cb-ca21-4647-8ceb-a41134de160f-0, Cpu: 43800, Mem: 33561339904 May 5 01:06:04.277: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:06:04.277: INFO: Node: v125-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Run a ReplicaSet with 4 replicas on node "v125-worker" 05/05/23 01:06:04.277 May 5 01:06:08.296: INFO: Waiting up to 1m0s for pod "test-pod" in namespace "sched-priority-9576" to be "running" May 5 01:06:08.299: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.066395ms May 5 01:06:10.305: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008294415s May 5 01:06:10.305: INFO: Pod "test-pod" satisfied condition "running" STEP: Verifying if the test-pod lands on node "v125-worker2" 05/05/23 01:06:10.308 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker 05/05/23 01:06:12.328 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 05/05/23 01:06:12.341 STEP: removing the label kubernetes.io/e2e-pts-score off the node v125-worker2 05/05/23 01:06:12.344 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score 05/05/23 01:06:12.356 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 May 5 01:06:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9576" for this suite. 05/05/23 01:06:12.364 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:12.379 May 5 01:06:12.379: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:06:12.38 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:06:12.392 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:12.396 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:06:12.399: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:06:12.405: INFO: Waiting for terminating namespaces to be deleted... 
May 5 01:06:12.408: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:06:12.414: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:12.414: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:12.414: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-2m5s8 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-fcf8n from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-gjnxv from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-z5jb4 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:06:12.419: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:12.419: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:12.419: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:12.419: INFO: test-pod from sched-priority-9576 started at 2023-05-05 01:06:08 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container test-pod ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 05/05/23 01:06:12.419 May 5 01:06:12.425: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-7823" to be "running" May 5 01:06:12.428: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765903ms May 5 01:06:14.433: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.007822727s May 5 01:06:14.433: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:06:14.436 STEP: Trying to apply a random taint on the found node. 
05/05/23 01:06:14.444 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c614b47b-b7bc-4de9-9564-a33ba1b1cdb2=testing-taint-value:NoSchedule 05/05/23 01:06:14.459 STEP: Trying to apply a random label on the found node. 05/05/23 01:06:14.463 STEP: verifying the node has the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 testing-label-value 05/05/23 01:06:14.475 STEP: Trying to relaunch the pod, now with tolerations. 05/05/23 01:06:14.478 May 5 01:06:14.483: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-pred-7823" to be "not pending" May 5 01:06:14.486: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38494ms May 5 01:06:16.491: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.008421648s May 5 01:06:16.491: INFO: Pod "with-tolerations" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 off the node v125-worker2 05/05/23 01:06:16.494 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 05/05/23 01:06:16.509 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c614b47b-b7bc-4de9-9564-a33ba1b1cdb2=testing-taint-value:NoSchedule 05/05/23 01:06:16.528 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:06:16.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7823" for this suite. 05/05/23 01:06:16.536 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","completed":8,"skipped":3309,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [4.162 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:12.379 May 5 01:06:12.379: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:06:12.38 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:06:12.392 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:12.396 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:06:12.399: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:06:12.405: INFO: Waiting for terminating namespaces to be deleted... 
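In the steps above the pod is relaunched "now with tolerations" and pinned to the randomly labelled node. The sketch below shows roughly what such a pod object looks like, assuming the k8s.io/api and k8s.io/apimachinery modules are available; the taint/label keys, values, and image are placeholders, not the generated names from this log.

```go
// Sketch of a pod that tolerates a NoSchedule taint and selects a node label,
// mirroring the "with-tolerations" pod relaunched by the spec. Placeholder names.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	taintKey := "kubernetes.io/e2e-taint-key-example" // placeholder for the random taint key
	labelKey := "kubernetes.io/e2e-label-key-example" // placeholder for the random label key

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "with-tolerations", Image: "registry.k8s.io/pause:3.8"}}, // placeholder image
			// Toleration matching the NoSchedule taint placed on the found node.
			Tolerations: []corev1.Toleration{{
				Key:      taintKey,
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			// NodeSelector pinning the pod to the node that received the random label.
			NodeSelector: map[string]string{labelKey: "testing-label-value"},
		},
	}
	fmt.Printf("%s tolerations=%v nodeSelector=%v\n", pod.Name, pod.Spec.Tolerations, pod.Spec.NodeSelector)
}
```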
May 5 01:06:12.408: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:06:12.414: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:12.414: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:12.414: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-2m5s8 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-fcf8n from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-gjnxv from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: rs-e2e-pts-score-z5jb4 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:12.414: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:12.414: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:06:12.419: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:12.419: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:12.419: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:12.419: INFO: test-pod from sched-priority-9576 started at 2023-05-05 01:06:08 +0000 UTC (1 container statuses recorded) May 5 01:06:12.419: INFO: Container test-pod ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:582 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 05/05/23 01:06:12.419 May 5 01:06:12.425: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-7823" to be "running" May 5 01:06:12.428: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765903ms May 5 01:06:14.433: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.007822727s May 5 01:06:14.433: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:06:14.436 STEP: Trying to apply a random taint on the found node. 
05/05/23 01:06:14.444 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c614b47b-b7bc-4de9-9564-a33ba1b1cdb2=testing-taint-value:NoSchedule 05/05/23 01:06:14.459 STEP: Trying to apply a random label on the found node. 05/05/23 01:06:14.463 STEP: verifying the node has the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 testing-label-value 05/05/23 01:06:14.475 STEP: Trying to relaunch the pod, now with tolerations. 05/05/23 01:06:14.478 May 5 01:06:14.483: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-pred-7823" to be "not pending" May 5 01:06:14.486: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38494ms May 5 01:06:16.491: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.008421648s May 5 01:06:16.491: INFO: Pod "with-tolerations" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 off the node v125-worker2 05/05/23 01:06:16.494 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-41acaa77-c1fe-4244-825b-064e8a0056b5 05/05/23 01:06:16.509 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c614b47b-b7bc-4de9-9564-a33ba1b1cdb2=testing-taint-value:NoSchedule 05/05/23 01:06:16.528 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:06:16.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7823" for this suite. 05/05/23 01:06:16.536 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:16.58 May 5 01:06:16.580: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:06:16.582 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:06:16.592 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:16.596 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:06:16.600: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:06:16.607: INFO: Waiting for terminating namespaces to be deleted... 
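The spec starting above, "validates that NodeAffinity is respected if not matching", creates a pod whose required node affinity/selector no node can satisfy, which is why the FailedScheduling event appears further down ("3 node(s) didn't match Pod's node affinity/selector"). A sketch of that kind of unsatisfiable required node affinity follows, assuming the k8s.io/api module; the label key and value are illustrative, not the ones the spec generates.

```go
// Sketch of a required node affinity that no node satisfies, leaving the pod
// Pending with a FailedScheduling event. Illustrative key/value only.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	affinity := &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-nonexistent-label", // placeholder: no node carries this label
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"value-no-node-has"},
					}},
				}},
			},
		},
	}
	fmt.Printf("required terms: %+v\n", affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms)
}
```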
May 5 01:06:16.610: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:06:16.616: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:16.616: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:16.616: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-2m5s8 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-fcf8n from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-gjnxv from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-z5jb4 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:06:16.622: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:16.622: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:16.622: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:16.622: INFO: with-tolerations from sched-pred-7823 started at 2023-05-05 01:06:14 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container with-tolerations ready: true, restart count 0 May 5 01:06:16.622: INFO: test-pod from sched-priority-9576 started at 2023-05-05 01:06:08 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container test-pod ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 STEP: Trying to schedule Pod with nonempty NodeSelector. 05/05/23 01:06:16.622 STEP: Considering event: Type = [Warning], Name = [restricted-pod.175c19d8184e8231], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 
05/05/23 01:06:16.644 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:06:17.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6142" for this suite. 05/05/23 01:06:17.647 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","completed":9,"skipped":3940,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [1.070 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:16.58 May 5 01:06:16.580: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 05/05/23 01:06:16.582 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:06:16.592 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:16.596 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 May 5 01:06:16.600: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 01:06:16.607: INFO: Waiting for terminating namespaces to be deleted... May 5 01:06:16.610: INFO: Logging pods the apiserver thinks is on node v125-worker before test May 5 01:06:16.616: INFO: create-loop-devs-9mv4v from kube-system started at 2023-03-27 13:20:36 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:16.616: INFO: kindnet-m8hwr from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:16.616: INFO: kube-proxy-kzswj from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-2m5s8 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-fcf8n from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-gjnxv from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: rs-e2e-pts-score-z5jb4 from sched-priority-9576 started at 2023-05-05 01:06:04 +0000 UTC (1 container statuses recorded) May 5 01:06:16.616: INFO: Container e2e-pts-score ready: true, restart count 0 May 5 01:06:16.616: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test May 5 01:06:16.622: INFO: create-loop-devs-cfx6b from kube-system started at 2023-03-27 13:20:36 
+0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container loopdev ready: true, restart count 0 May 5 01:06:16.622: INFO: kindnet-4spxt from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container kindnet-cni ready: true, restart count 0 May 5 01:06:16.622: INFO: kube-proxy-df52h from kube-system started at 2023-03-27 13:20:32 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container kube-proxy ready: true, restart count 0 May 5 01:06:16.622: INFO: with-tolerations from sched-pred-7823 started at 2023-05-05 01:06:14 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container with-tolerations ready: true, restart count 0 May 5 01:06:16.622: INFO: test-pod from sched-priority-9576 started at 2023-05-05 01:06:08 +0000 UTC (1 container statuses recorded) May 5 01:06:16.622: INFO: Container test-pod ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:493 STEP: Trying to schedule Pod with nonempty NodeSelector. 05/05/23 01:06:16.622 STEP: Considering event: Type = [Warning], Name = [restricted-pod.175c19d8184e8231], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 05/05/23 01:06:16.644 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 May 5 01:06:17.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6142" for this suite. 05/05/23 01:06:17.647 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted test/e2e/scheduling/preemption.go:355 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:17.702 May 5 01:06:17.702: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 05/05/23 01:06:17.703 STEP: Waiting for a default service account to be provisioned in namespace 
05/05/23 01:06:17.71 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:17.712 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 May 5 01:06:17.722: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:07:17.748: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:07:17.752 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:07:17.752 May 5 01:07:17.763: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8139" to be "running" May 5 01:07:17.766: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936034ms May 5 01:07:19.771: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008111915s May 5 01:07:19.771: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:07:19.774 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:07:19.785 May 5 01:07:19.790: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8139" to be "running" May 5 01:07:19.793: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.951664ms May 5 01:07:21.798: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.00755764s May 5 01:07:21.798: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:07:21.801 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 05/05/23 01:07:21.807 STEP: Apply 10 fake resource to node v125-worker. 05/05/23 01:07:21.818 STEP: Apply 10 fake resource to node v125-worker2. 05/05/23 01:07:21.844 [It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 05/05/23 01:07:21.854 May 5 01:07:21.859: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-8139" to be "running" May 5 01:07:21.861: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304923ms May 5 01:07:23.865: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006190848s May 5 01:07:25.866: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 4.007458012s May 5 01:07:25.866: INFO: Pod "high" satisfied condition "running" May 5 01:07:25.874: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-8139" to be "running" May 5 01:07:25.878: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09754ms May 5 01:07:27.883: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039107s May 5 01:07:29.883: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.008205776s May 5 01:07:29.883: INFO: Pod "low-1" satisfied condition "running" May 5 01:07:29.891: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-8139" to be "running" May 5 01:07:29.894: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982418ms May 5 01:07:31.899: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007475217s May 5 01:07:31.899: INFO: Pod "low-2" satisfied condition "running" May 5 01:07:31.907: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-8139" to be "running" May 5 01:07:31.911: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647485ms May 5 01:07:33.914: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007577907s May 5 01:07:33.915: INFO: Pod "low-3" satisfied condition "running" STEP: Create 1 Medium Pod with TopologySpreadConstraints 05/05/23 01:07:33.918 May 5 01:07:33.923: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-8139" to be "running" May 5 01:07:33.927: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 3.512166ms May 5 01:07:35.931: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007784894s May 5 01:07:37.933: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 4.009475327s May 5 01:07:37.933: INFO: Pod "medium" satisfied condition "running" STEP: Verify there are 3 Pods left in this namespace 05/05/23 01:07:37.936 STEP: Pod "high" is as expected to be running. 05/05/23 01:07:37.94 STEP: Pod "low-1" is as expected to be running. 05/05/23 01:07:37.94 STEP: Pod "medium" is as expected to be running. 05/05/23 01:07:37.941 [AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker 05/05/23 01:07:37.941 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 05/05/23 01:07:37.954 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker2 05/05/23 01:07:37.958 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 05/05/23 01:07:37.971 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 May 5 01:07:37.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8139" for this suite. 
05/05/23 01:07:38 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","completed":10,"skipped":4988,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [80.339 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption test/e2e/scheduling/preemption.go:316 validates proper pods are preempted test/e2e/scheduling/preemption.go:355 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:06:17.702 May 5 01:06:17.702: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 05/05/23 01:06:17.703 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:06:17.71 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:06:17.712 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 May 5 01:06:17.722: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:07:17.748: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:322 STEP: Trying to get 2 available nodes which can run pod 05/05/23 01:07:17.752 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:07:17.752 May 5 01:07:17.763: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8139" to be "running" May 5 01:07:17.766: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.936034ms May 5 01:07:19.771: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008111915s May 5 01:07:19.771: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:07:19.774 STEP: Trying to launch a pod without a label to get a node which can launch it. 05/05/23 01:07:19.785 May 5 01:07:19.790: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-8139" to be "running" May 5 01:07:19.793: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.951664ms May 5 01:07:21.798: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.00755764s May 5 01:07:21.798: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 05/05/23 01:07:21.801 STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 05/05/23 01:07:21.807 STEP: Apply 10 fake resource to node v125-worker. 05/05/23 01:07:21.818 STEP: Apply 10 fake resource to node v125-worker2. 05/05/23 01:07:21.844 [It] validates proper pods are preempted test/e2e/scheduling/preemption.go:355 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. 
05/05/23 01:07:21.854 May 5 01:07:21.859: INFO: Waiting up to 1m0s for pod "high" in namespace "sched-preemption-8139" to be "running" May 5 01:07:21.861: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.304923ms May 5 01:07:23.865: INFO: Pod "high": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006190848s May 5 01:07:25.866: INFO: Pod "high": Phase="Running", Reason="", readiness=true. Elapsed: 4.007458012s May 5 01:07:25.866: INFO: Pod "high" satisfied condition "running" May 5 01:07:25.874: INFO: Waiting up to 1m0s for pod "low-1" in namespace "sched-preemption-8139" to be "running" May 5 01:07:25.878: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09754ms May 5 01:07:27.883: INFO: Pod "low-1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039107s May 5 01:07:29.883: INFO: Pod "low-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.008205776s May 5 01:07:29.883: INFO: Pod "low-1" satisfied condition "running" May 5 01:07:29.891: INFO: Waiting up to 1m0s for pod "low-2" in namespace "sched-preemption-8139" to be "running" May 5 01:07:29.894: INFO: Pod "low-2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982418ms May 5 01:07:31.899: INFO: Pod "low-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007475217s May 5 01:07:31.899: INFO: Pod "low-2" satisfied condition "running" May 5 01:07:31.907: INFO: Waiting up to 1m0s for pod "low-3" in namespace "sched-preemption-8139" to be "running" May 5 01:07:31.911: INFO: Pod "low-3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.647485ms May 5 01:07:33.914: INFO: Pod "low-3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007577907s May 5 01:07:33.915: INFO: Pod "low-3" satisfied condition "running" STEP: Create 1 Medium Pod with TopologySpreadConstraints 05/05/23 01:07:33.918 May 5 01:07:33.923: INFO: Waiting up to 1m0s for pod "medium" in namespace "sched-preemption-8139" to be "running" May 5 01:07:33.927: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 3.512166ms May 5 01:07:35.931: INFO: Pod "medium": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007784894s May 5 01:07:37.933: INFO: Pod "medium": Phase="Running", Reason="", readiness=true. Elapsed: 4.009475327s May 5 01:07:37.933: INFO: Pod "medium" satisfied condition "running" STEP: Verify there are 3 Pods left in this namespace 05/05/23 01:07:37.936 STEP: Pod "high" is as expected to be running. 05/05/23 01:07:37.94 STEP: Pod "low-1" is as expected to be running. 05/05/23 01:07:37.94 STEP: Pod "medium" is as expected to be running. 05/05/23 01:07:37.941 [AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:343 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker 05/05/23 01:07:37.941 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 05/05/23 01:07:37.954 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v125-worker2 05/05/23 01:07:37.958 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption 05/05/23 01:07:37.971 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 May 5 01:07:37.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8139" for this suite. 
05/05/23 01:07:38 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:07:38.12 May 5 01:07:38.120: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:07:38.121 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:07:38.131 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:07:38.136 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:07:38.140: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:08:38.171: INFO: Waiting for terminating namespaces to be deleted... May 5 01:08:38.174: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:08:38.187: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:08:38.187: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
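The ComputeCPUMemFraction lines below boil down to dividing the summed pod requests on a node by that node's allocatable capacity. The stdlib-only sketch below reproduces the fractions logged for v125-worker, before and after the balancing pod is created.

```go
// Reproduces the CPU/memory fractions logged by ComputeCPUMemFraction for
// node v125-worker, using the totals that appear in the log.
package main

import "fmt"

func main() {
	const (
		allocCPUMilli = 88000       // cpuAllocatableMil
		allocMemBytes = 67412086784 // memAllocatableVal
		reqCPUMilli   = 200         // totalRequestedCPUResource before balancing
		reqMemBytes   = 157286400   // totalRequestedMemResource before balancing
	)
	fmt.Printf("cpuFraction before: %v\n", float64(reqCPUMilli)/float64(allocCPUMilli)) // ~0.0022727272727272726
	fmt.Printf("memFraction before: %v\n", float64(reqMemBytes)/float64(allocMemBytes)) // ~0.0023332077006304945

	// After the balancing pod (Cpu: 43800, Mem: 33561339904) is added, the
	// requested totals become 44000 and 33718626304, i.e. roughly half of allocatable.
	fmt.Printf("cpuFraction after:  %v\n", float64(44000)/float64(allocCPUMilli))       // 0.5
	fmt.Printf("memFraction after:  %v\n", float64(33718626304)/float64(allocMemBytes)) // ~0.5001866566160504
}
```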
May 5 01:08:38.194: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:38.194: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:38.195: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.195: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.195: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:38.195: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:38.195: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.195: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 May 5 01:08:38.202: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:38.202: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:38.202: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.202: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.202: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:38.202: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:38.202: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.202: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.213: INFO: Waiting for running... May 5 01:08:38.213: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
05/05/23 01:08:43.273 May 5 01:08:43.273: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:43.273: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:43.273: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: b924d38b-38d9-4791-a0d1-c21d952d1dce-0, Cpu: 43800, Mem: 33561339904 May 5 01:08:43.273: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:08:43.273: INFO: Node: v125-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:08:43.273 May 5 01:08:43.273: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:43.273: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:43.273: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: da2f5eff-ca47-488f-9271-eb1bff19312e-0, Cpu: 43800, Mem: 33561339904 May 5 01:08:43.273: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:08:43.273: INFO: Node: v125-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Trying to apply 10 (tolerable) taints on the first node. 05/05/23 01:08:43.274 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4a5fd346-199b-45d5-9ebd=testing-taint-value-18ec07c3-e7f0-4326-ae15-58cb2d7595d3:PreferNoSchedule 05/05/23 01:08:43.289 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d8ac07f4-61be-4000-b249=testing-taint-value-53cc9ea2-0d8d-4d0a-98f8-bdd447040249:PreferNoSchedule 05/05/23 01:08:43.308 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d04049f9-9087-4304-a003=testing-taint-value-1f90df3f-23bf-42ff-988b-f6114e1597b5:PreferNoSchedule 05/05/23 01:08:43.331 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-acb12324-fe71-42e5-b7c5=testing-taint-value-ce4932de-064e-44ad-b6bc-3918b2039489:PreferNoSchedule 05/05/23 01:08:43.35 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d358877f-1117-4212-a8e9=testing-taint-value-23a03bf8-fe4f-438f-baaa-8f0a887f3ef2:PreferNoSchedule 05/05/23 01:08:43.369 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a31987d7-895f-4220-97a7=testing-taint-value-acd8060d-0b12-480e-ba0b-4fba51acdc99:PreferNoSchedule 05/05/23 01:08:43.387 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-38bb15ee-7b21-4759-933b=testing-taint-value-d1f51a2c-4c47-46df-b1fd-03c9b27d4654:PreferNoSchedule 05/05/23 01:08:43.406 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-44916ac3-b3be-406d-8fe9=testing-taint-value-2b96e793-0c9d-4f45-b7fe-26504b47f30d:PreferNoSchedule 05/05/23 01:08:43.425 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4c2ef62f-1632-4d90-8a67=testing-taint-value-84b24a82-393d-4ba5-aad3-5f77bad5dde7:PreferNoSchedule 05/05/23 01:08:43.444 STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-b0e73701-8b13-43f4-b252=testing-taint-value-a6c42aec-cd46-460b-84fa-472527b8d2e4:PreferNoSchedule 05/05/23 01:08:43.463 STEP: Adding 10 intolerable taints to all other nodes 05/05/23 01:08:43.466 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4dea0131-5a29-4945-89ba=testing-taint-value-9f4fd6d4-4d58-452a-995d-b43960726cef:PreferNoSchedule 05/05/23 01:08:43.48 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c8f4ac4a-8ced-4b5b-8825=testing-taint-value-a7adbaf6-f7cf-40f5-9845-be05e13a9b87:PreferNoSchedule 05/05/23 01:08:43.499 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a7fb6b8b-4050-4e8c-beb7=testing-taint-value-d67f1cf3-828d-4662-8110-c57a021f8df7:PreferNoSchedule 05/05/23 01:08:43.517 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c00a5931-0daf-4756-9e76=testing-taint-value-78c136d5-382b-4b85-942f-cc4e44a0fdd5:PreferNoSchedule 05/05/23 01:08:43.535 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bf7b5e95-6331-4de5-9fed=testing-taint-value-902c3dfb-201e-4af5-bc69-b836e1d14bda:PreferNoSchedule 05/05/23 01:08:43.554 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9cbdb06b-0bbc-43e7-a0f5=testing-taint-value-47289a54-a06e-46c0-af88-4cfcaada1fc2:PreferNoSchedule 05/05/23 01:08:43.572 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d4ee2fac-a7d9-4fdd-89ed=testing-taint-value-87837a9f-5e96-4995-bbe4-3a42c89b4282:PreferNoSchedule 05/05/23 01:08:43.594 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f8846f3b-d14a-454e-b770=testing-taint-value-0ea553a0-fcc4-4e18-bb0a-9b709a43f5d4:PreferNoSchedule 05/05/23 01:08:43.613 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cd229246-08e9-4acd-a0a1=testing-taint-value-722c1343-8518-4d1e-af82-03d884f1a130:PreferNoSchedule 05/05/23 01:08:43.632 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-87636e32-6d0f-40c3-9108=testing-taint-value-29361f30-fa3b-4088-9461-bcaf63498256:PreferNoSchedule 05/05/23 01:08:43.777 STEP: Create a pod that tolerates all the taints of the first node. 05/05/23 01:08:43.818 May 5 01:08:43.869: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-561" to be "running" May 5 01:08:43.918: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 48.560481ms May 5 01:08:45.923: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.053744729s May 5 01:08:45.923: INFO: Pod "with-tolerations" satisfied condition "running" STEP: Pod should prefer scheduled to the node that pod can tolerate. 
05/05/23 01:08:45.923 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4dea0131-5a29-4945-89ba=testing-taint-value-9f4fd6d4-4d58-452a-995d-b43960726cef:PreferNoSchedule 05/05/23 01:08:45.943 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c8f4ac4a-8ced-4b5b-8825=testing-taint-value-a7adbaf6-f7cf-40f5-9845-be05e13a9b87:PreferNoSchedule 05/05/23 01:08:45.962 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a7fb6b8b-4050-4e8c-beb7=testing-taint-value-d67f1cf3-828d-4662-8110-c57a021f8df7:PreferNoSchedule 05/05/23 01:08:45.98 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c00a5931-0daf-4756-9e76=testing-taint-value-78c136d5-382b-4b85-942f-cc4e44a0fdd5:PreferNoSchedule 05/05/23 01:08:45.999 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bf7b5e95-6331-4de5-9fed=testing-taint-value-902c3dfb-201e-4af5-bc69-b836e1d14bda:PreferNoSchedule 05/05/23 01:08:46.018 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9cbdb06b-0bbc-43e7-a0f5=testing-taint-value-47289a54-a06e-46c0-af88-4cfcaada1fc2:PreferNoSchedule 05/05/23 01:08:46.037 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d4ee2fac-a7d9-4fdd-89ed=testing-taint-value-87837a9f-5e96-4995-bbe4-3a42c89b4282:PreferNoSchedule 05/05/23 01:08:46.055 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f8846f3b-d14a-454e-b770=testing-taint-value-0ea553a0-fcc4-4e18-bb0a-9b709a43f5d4:PreferNoSchedule 05/05/23 01:08:46.074 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cd229246-08e9-4acd-a0a1=testing-taint-value-722c1343-8518-4d1e-af82-03d884f1a130:PreferNoSchedule 05/05/23 01:08:46.092 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-87636e32-6d0f-40c3-9108=testing-taint-value-29361f30-fa3b-4088-9461-bcaf63498256:PreferNoSchedule 05/05/23 01:08:46.11 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4a5fd346-199b-45d5-9ebd=testing-taint-value-18ec07c3-e7f0-4326-ae15-58cb2d7595d3:PreferNoSchedule 05/05/23 01:08:46.129 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d8ac07f4-61be-4000-b249=testing-taint-value-53cc9ea2-0d8d-4d0a-98f8-bdd447040249:PreferNoSchedule 05/05/23 01:08:46.149 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d04049f9-9087-4304-a003=testing-taint-value-1f90df3f-23bf-42ff-988b-f6114e1597b5:PreferNoSchedule 05/05/23 01:08:46.167 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-acb12324-fe71-42e5-b7c5=testing-taint-value-ce4932de-064e-44ad-b6bc-3918b2039489:PreferNoSchedule 05/05/23 01:08:46.186 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d358877f-1117-4212-a8e9=testing-taint-value-23a03bf8-fe4f-438f-baaa-8f0a887f3ef2:PreferNoSchedule 05/05/23 01:08:46.225 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a31987d7-895f-4220-97a7=testing-taint-value-acd8060d-0b12-480e-ba0b-4fba51acdc99:PreferNoSchedule 05/05/23 01:08:46.375 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-38bb15ee-7b21-4759-933b=testing-taint-value-d1f51a2c-4c47-46df-b1fd-03c9b27d4654:PreferNoSchedule 05/05/23 
01:08:46.525 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-44916ac3-b3be-406d-8fe9=testing-taint-value-2b96e793-0c9d-4f45-b7fe-26504b47f30d:PreferNoSchedule 05/05/23 01:08:46.675 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4c2ef62f-1632-4d90-8a67=testing-taint-value-84b24a82-393d-4ba5-aad3-5f77bad5dde7:PreferNoSchedule 05/05/23 01:08:46.825 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b0e73701-8b13-43f4-b252=testing-taint-value-a6c42aec-cd46-460b-84fa-472527b8d2e4:PreferNoSchedule 05/05/23 01:08:46.975 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:187 May 5 01:08:49.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-561" for this suite. 05/05/23 01:08:49.127 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","completed":11,"skipped":6596,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]} ------------------------------ • [SLOW TEST] [71.013 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 05/05/23 01:07:38.12 May 5 01:07:38.120: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 05/05/23 01:07:38.121 STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:07:38.131 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:07:38.136 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 May 5 01:07:38.140: INFO: Waiting up to 1m0s for all nodes to be ready May 5 01:08:38.171: INFO: Waiting for terminating namespaces to be deleted... May 5 01:08:38.174: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 01:08:38.187: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 01:08:38.187: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
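The behaviour exercised above (ten tolerable PreferNoSchedule taints on the first node, ten intolerable ones on the others) is a soft preference: nodes are scored by how many PreferNoSchedule taints the pod does not tolerate, and the node with the fewest wins. The stdlib-only sketch below illustrates that counting idea; it is not the real TaintToleration scoring plugin, and the taint names are made up.

```go
// Illustrative count of intolerable PreferNoSchedule taints per node, the
// quantity the scheduler's soft taint preference is based on. Not the real plugin.
package main

import "fmt"

type taint struct{ Key, Value string } // all assumed to carry effect PreferNoSchedule

func intolerableCount(nodeTaints []taint, tolerated map[taint]bool) int {
	n := 0
	for _, t := range nodeTaints {
		if !tolerated[t] {
			n++
		}
	}
	return n
}

func main() {
	// First node: 10 taints, all tolerated by the pod.
	// Other node: 10 taints, none tolerated.
	first := make([]taint, 0, 10)
	other := make([]taint, 0, 10)
	tolerated := map[taint]bool{}
	for i := 0; i < 10; i++ {
		ft := taint{Key: fmt.Sprintf("tolerable-%d", i), Value: "v"}
		first = append(first, ft)
		tolerated[ft] = true
		other = append(other, taint{Key: fmt.Sprintf("intolerable-%d", i), Value: "v"})
	}

	fmt.Println("intolerable taints on first node:", intolerableCount(first, tolerated)) // 0  -> preferred
	fmt.Println("intolerable taints on other node:", intolerableCount(other, tolerated)) // 10 -> deprioritized
}
```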
May 5 01:08:38.194: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:38.194: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:38.195: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.195: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.195: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:38.195: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:38.195: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:38.195: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.195: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208 May 5 01:08:38.202: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:38.202: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:38.202: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Node: v125-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.202: INFO: Node: v125-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.202: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:38.202: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:38.202: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:38.202: INFO: Node: v125-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 May 5 01:08:38.202: INFO: Node: v125-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 May 5 01:08:38.213: INFO: Waiting for running... May 5 01:08:38.213: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
05/05/23 01:08:43.273 May 5 01:08:43.273: INFO: ComputeCPUMemFraction for node: v125-worker May 5 01:08:43.273: INFO: Pod for on the node: create-loop-devs-9mv4v, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: kindnet-m8hwr, Cpu: 100, Mem: 52428800 May 5 01:08:43.273: INFO: Pod for on the node: kube-proxy-kzswj, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: b924d38b-38d9-4791-a0d1-c21d952d1dce-0, Cpu: 43800, Mem: 33561339904 May 5 01:08:43.273: INFO: Node: v125-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:08:43.273: INFO: Node: v125-worker, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Compute Cpu, Mem Fraction after create balanced pods. 05/05/23 01:08:43.273 May 5 01:08:43.273: INFO: ComputeCPUMemFraction for node: v125-worker2 May 5 01:08:43.273: INFO: Pod for on the node: create-loop-devs-cfx6b, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: kindnet-4spxt, Cpu: 100, Mem: 52428800 May 5 01:08:43.273: INFO: Pod for on the node: kube-proxy-df52h, Cpu: 100, Mem: 209715200 May 5 01:08:43.273: INFO: Pod for on the node: da2f5eff-ca47-488f-9271-eb1bff19312e-0, Cpu: 43800, Mem: 33561339904 May 5 01:08:43.273: INFO: Node: v125-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 5 01:08:43.273: INFO: Node: v125-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504 STEP: Trying to apply 10 (tolerable) taints on the first node. 05/05/23 01:08:43.274 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4a5fd346-199b-45d5-9ebd=testing-taint-value-18ec07c3-e7f0-4326-ae15-58cb2d7595d3:PreferNoSchedule 05/05/23 01:08:43.289 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d8ac07f4-61be-4000-b249=testing-taint-value-53cc9ea2-0d8d-4d0a-98f8-bdd447040249:PreferNoSchedule 05/05/23 01:08:43.308 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d04049f9-9087-4304-a003=testing-taint-value-1f90df3f-23bf-42ff-988b-f6114e1597b5:PreferNoSchedule 05/05/23 01:08:43.331 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-acb12324-fe71-42e5-b7c5=testing-taint-value-ce4932de-064e-44ad-b6bc-3918b2039489:PreferNoSchedule 05/05/23 01:08:43.35 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d358877f-1117-4212-a8e9=testing-taint-value-23a03bf8-fe4f-438f-baaa-8f0a887f3ef2:PreferNoSchedule 05/05/23 01:08:43.369 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a31987d7-895f-4220-97a7=testing-taint-value-acd8060d-0b12-480e-ba0b-4fba51acdc99:PreferNoSchedule 05/05/23 01:08:43.387 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-38bb15ee-7b21-4759-933b=testing-taint-value-d1f51a2c-4c47-46df-b1fd-03c9b27d4654:PreferNoSchedule 05/05/23 01:08:43.406 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-44916ac3-b3be-406d-8fe9=testing-taint-value-2b96e793-0c9d-4f45-b7fe-26504b47f30d:PreferNoSchedule 05/05/23 01:08:43.425 STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4c2ef62f-1632-4d90-8a67=testing-taint-value-84b24a82-393d-4ba5-aad3-5f77bad5dde7:PreferNoSchedule 05/05/23 01:08:43.444 STEP: verifying the node has the taint 
STEP: Adding 10 intolerable taints to all other nodes 05/05/23 01:08:43.466
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4dea0131-5a29-4945-89ba=testing-taint-value-9f4fd6d4-4d58-452a-995d-b43960726cef:PreferNoSchedule 05/05/23 01:08:43.48
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c8f4ac4a-8ced-4b5b-8825=testing-taint-value-a7adbaf6-f7cf-40f5-9845-be05e13a9b87:PreferNoSchedule 05/05/23 01:08:43.499
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a7fb6b8b-4050-4e8c-beb7=testing-taint-value-d67f1cf3-828d-4662-8110-c57a021f8df7:PreferNoSchedule 05/05/23 01:08:43.517
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c00a5931-0daf-4756-9e76=testing-taint-value-78c136d5-382b-4b85-942f-cc4e44a0fdd5:PreferNoSchedule 05/05/23 01:08:43.535
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bf7b5e95-6331-4de5-9fed=testing-taint-value-902c3dfb-201e-4af5-bc69-b836e1d14bda:PreferNoSchedule 05/05/23 01:08:43.554
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9cbdb06b-0bbc-43e7-a0f5=testing-taint-value-47289a54-a06e-46c0-af88-4cfcaada1fc2:PreferNoSchedule 05/05/23 01:08:43.572
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d4ee2fac-a7d9-4fdd-89ed=testing-taint-value-87837a9f-5e96-4995-bbe4-3a42c89b4282:PreferNoSchedule 05/05/23 01:08:43.594
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f8846f3b-d14a-454e-b770=testing-taint-value-0ea553a0-fcc4-4e18-bb0a-9b709a43f5d4:PreferNoSchedule 05/05/23 01:08:43.613
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cd229246-08e9-4acd-a0a1=testing-taint-value-722c1343-8518-4d1e-af82-03d884f1a130:PreferNoSchedule 05/05/23 01:08:43.632
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-87636e32-6d0f-40c3-9108=testing-taint-value-29361f30-fa3b-4088-9461-bcaf63498256:PreferNoSchedule 05/05/23 01:08:43.777
STEP: Create a pod that tolerates all the taints of the first node. 05/05/23 01:08:43.818
May 5 01:08:43.869: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-561" to be "running"
May 5 01:08:43.918: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 48.560481ms
May 5 01:08:45.923: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.053744729s
May 5 01:08:45.923: INFO: Pod "with-tolerations" satisfied condition "running"
STEP: Pod should prefer scheduled to the node that pod can tolerate. 05/05/23 01:08:45.923
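The "with-tolerations" pod lands on the first node because its spec carries one toleration per taint applied to that node, while every other node keeps ten unmatched PreferNoSchedule taints and is therefore scored lower by the scheduler. As a reference for the shape of such a toleration (key and value copied from the first taint in the log; the enclosing pod spec is omitted, so this is only a fragment sketch):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Matches the first PreferNoSchedule taint applied to the first node above.
        tol := v1.Toleration{
            Key:      "kubernetes.io/e2e-scheduling-priorities-4a5fd346-199b-45d5-9ebd",
            Operator: v1.TolerationOpEqual,
            Value:    "testing-taint-value-18ec07c3-e7f0-4326-ae15-58cb2d7595d3",
            Effect:   v1.TaintEffectPreferNoSchedule,
        }
        fmt.Printf("%+v\n", tol)
    }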
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4dea0131-5a29-4945-89ba=testing-taint-value-9f4fd6d4-4d58-452a-995d-b43960726cef:PreferNoSchedule 05/05/23 01:08:45.943
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c8f4ac4a-8ced-4b5b-8825=testing-taint-value-a7adbaf6-f7cf-40f5-9845-be05e13a9b87:PreferNoSchedule 05/05/23 01:08:45.962
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a7fb6b8b-4050-4e8c-beb7=testing-taint-value-d67f1cf3-828d-4662-8110-c57a021f8df7:PreferNoSchedule 05/05/23 01:08:45.98
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c00a5931-0daf-4756-9e76=testing-taint-value-78c136d5-382b-4b85-942f-cc4e44a0fdd5:PreferNoSchedule 05/05/23 01:08:45.999
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bf7b5e95-6331-4de5-9fed=testing-taint-value-902c3dfb-201e-4af5-bc69-b836e1d14bda:PreferNoSchedule 05/05/23 01:08:46.018
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9cbdb06b-0bbc-43e7-a0f5=testing-taint-value-47289a54-a06e-46c0-af88-4cfcaada1fc2:PreferNoSchedule 05/05/23 01:08:46.037
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d4ee2fac-a7d9-4fdd-89ed=testing-taint-value-87837a9f-5e96-4995-bbe4-3a42c89b4282:PreferNoSchedule 05/05/23 01:08:46.055
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f8846f3b-d14a-454e-b770=testing-taint-value-0ea553a0-fcc4-4e18-bb0a-9b709a43f5d4:PreferNoSchedule 05/05/23 01:08:46.074
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cd229246-08e9-4acd-a0a1=testing-taint-value-722c1343-8518-4d1e-af82-03d884f1a130:PreferNoSchedule 05/05/23 01:08:46.092
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-87636e32-6d0f-40c3-9108=testing-taint-value-29361f30-fa3b-4088-9461-bcaf63498256:PreferNoSchedule 05/05/23 01:08:46.11
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4a5fd346-199b-45d5-9ebd=testing-taint-value-18ec07c3-e7f0-4326-ae15-58cb2d7595d3:PreferNoSchedule 05/05/23 01:08:46.129
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d8ac07f4-61be-4000-b249=testing-taint-value-53cc9ea2-0d8d-4d0a-98f8-bdd447040249:PreferNoSchedule 05/05/23 01:08:46.149
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d04049f9-9087-4304-a003=testing-taint-value-1f90df3f-23bf-42ff-988b-f6114e1597b5:PreferNoSchedule 05/05/23 01:08:46.167
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-acb12324-fe71-42e5-b7c5=testing-taint-value-ce4932de-064e-44ad-b6bc-3918b2039489:PreferNoSchedule 05/05/23 01:08:46.186
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d358877f-1117-4212-a8e9=testing-taint-value-23a03bf8-fe4f-438f-baaa-8f0a887f3ef2:PreferNoSchedule 05/05/23 01:08:46.225
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a31987d7-895f-4220-97a7=testing-taint-value-acd8060d-0b12-480e-ba0b-4fba51acdc99:PreferNoSchedule 05/05/23 01:08:46.375
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-38bb15ee-7b21-4759-933b=testing-taint-value-d1f51a2c-4c47-46df-b1fd-03c9b27d4654:PreferNoSchedule 05/05/23 01:08:46.525
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-44916ac3-b3be-406d-8fe9=testing-taint-value-2b96e793-0c9d-4f45-b7fe-26504b47f30d:PreferNoSchedule 05/05/23 01:08:46.675
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4c2ef62f-1632-4d90-8a67=testing-taint-value-84b24a82-393d-4ba5-aad3-5f77bad5dde7:PreferNoSchedule 05/05/23 01:08:46.825
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b0e73701-8b13-43f4-b252=testing-taint-value-a6c42aec-cd46-460b-84fa-472527b8d2e4:PreferNoSchedule 05/05/23 01:08:46.975
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
test/e2e/framework/framework.go:187
May 5 01:08:49.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-561" for this suite. 05/05/23 01:08:49.127
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
test/e2e/scheduling/priorities.go:96
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
test/e2e/scheduling/ubernetes_lite.go:81
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 05/05/23 01:08:49.148
May 5 01:08:49.148: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename multi-az 05/05/23 01:08:49.15
STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:08:49.161
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:08:49.164
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:51
STEP: Checking for multi-zone cluster. Schedulable zone count = 0 05/05/23 01:08:49.172
May 5 01:08:49.172: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/framework.go:187
May 5 01:08:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "multi-az-7294" for this suite. 05/05/23 01:08:49.177
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:72
------------------------------
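The skip above is the Multi-AZ precondition at work: the suite counts the distinct zones exposed by schedulable nodes and only runs these specs when more than one zone is present, which this kind cluster does not satisfy (zone count 0, i.e. no zone labels on the nodes). A rough sketch of that zone count, assuming the well-known topology.kubernetes.io/zone node label; this is not the framework's exact implementation, which may also consider the legacy failure-domain label:

    package main

    import "fmt"

    // countZones is a hypothetical version of the precondition: count the
    // distinct, non-empty zone label values across schedulable nodes.
    func countZones(nodeZoneLabels []string) int {
        zones := map[string]struct{}{}
        for _, z := range nodeZoneLabels {
            if z != "" {
                zones[z] = struct{}{}
            }
        }
        return len(zones)
    }

    func main() {
        // Nodes without a topology.kubernetes.io/zone label, as in this cluster,
        // yield a count of 0, so the spec is skipped.
        fmt.Println(countZones([]string{"", "", ""})) // 0
    }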
S [SKIPPED] [0.033 seconds]
[sig-scheduling] Multi-AZ Clusters [BeforeEach]
test/e2e/scheduling/ubernetes_lite.go:51
should spread the pods of a replication controller across zones [Serial]
test/e2e/scheduling/ubernetes_lite.go:81
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 05/05/23 01:08:49.148
May 5 01:08:49.148: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename multi-az 05/05/23 01:08:49.15
STEP: Waiting for a default service account to be provisioned in namespace 05/05/23 01:08:49.161
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/05/23 01:08:49.164
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:51
STEP: Checking for multi-zone cluster. Schedulable zone count = 0 05/05/23 01:08:49.172
May 5 01:08:49.172: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/framework/framework.go:187
May 5 01:08:49.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "multi-az-7294" for this suite. 05/05/23 01:08:49.177
[AfterEach] [sig-scheduling] Multi-AZ Clusters
test/e2e/scheduling/ubernetes_lite.go:72
<< End Captured GinkgoWriter Output
Schedulable zone count is 0, only run for multi-zone clusters, skipping test
In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
test/e2e/e2e.go:87
[SynchronizedAfterSuite] TOP-LEVEL
test/e2e/e2e.go:87
{"msg":"Test Suite completed","completed":11,"skipped":7054,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for"]}
May 5 01:08:49.203: INFO: Running AfterSuite actions on all nodes
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[SynchronizedAfterSuite] TOP-LEVEL
test/e2e/e2e.go:87
May 5 01:08:49.203: INFO: Running AfterSuite actions on node 1
May 5 01:08:49.203: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
test/e2e/e2e.go:87
Begin Captured GinkgoWriter Output >>
[SynchronizedAfterSuite] TOP-LEVEL
test/e2e/e2e.go:87
May 5 01:08:49.203: INFO: Running AfterSuite actions on all nodes
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
May 5 01:08:49.203: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
[SynchronizedAfterSuite] TOP-LEVEL
test/e2e/e2e.go:87
May 5 01:08:49.203: INFO: Running AfterSuite actions on node 1
May 5 01:08:49.203: INFO: Skipping dumping logs from cluster
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:146
[ReportAfterSuite] TOP-LEVEL
test/e2e/e2e_test.go:146
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
test/e2e/e2e_test.go:146
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
test/e2e/e2e_test.go:146
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:559
[ReportAfterSuite] TOP-LEVEL
test/e2e/framework/test_context.go:559
------------------------------
[ReportAfterSuite] PASSED [0.109 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
test/e2e/framework/test_context.go:559
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
test/e2e/framework/test_context.go:559
<< End Captured GinkgoWriter Output
------------------------------
Summarizing 1 Failure:
[FAIL] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach] verify pod overhead is accounted for
test/e2e/scheduling/predicates.go:248
Ran 12 of 7066 Specs in 355.888 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 7054 Skipped
--- FAIL: TestE2E (356.15s)
FAIL
Ginkgo ran 1 suite in 5m56.271988033s
Test Suite Failed
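On the one failure in the summary: that spec exercises pod overhead, where a RuntimeClass declares a fixed per-pod resource surcharge that the scheduler must add on top of the containers' own requests when fitting the pod onto a node. For orientation only, an illustrative RuntimeClass carrying such an overhead (not the object the failing spec actually creates):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        nodev1 "k8s.io/api/node/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical RuntimeClass: pods referencing it are scheduled as if they
        // requested an extra 100m CPU and 64Mi memory on top of their containers.
        rc := nodev1.RuntimeClass{
            ObjectMeta: metav1.ObjectMeta{Name: "example-overhead"},
            Handler:    "runc",
            Overhead: &nodev1.Overhead{
                PodFixed: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("100m"),
                    corev1.ResourceMemory: resource.MustParse("64Mi"),
                },
            },
        }
        fmt.Printf("%s overhead: %v\n", rc.Name, rc.Overhead.PodFixed)
    }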