I0411 18:01:42.271708 16 e2e.go:126] Starting e2e run "cea7a75a-3567-4576-9b73-fda38ab26bef" on Ginkgo node 1
Apr 11 18:01:42.287: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1712858502 - will randomize all specs
Will run 15 of 7069 specs
------------------------------
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77
Apr 11 18:01:42.436: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:01:42.438: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 11 18:01:42.461: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 11 18:01:42.490: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 11 18:01:42.490: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 11 18:01:42.490: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 11 18:01:42.496: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Apr 11 18:01:42.496: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 11 18:01:42.496: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 11 18:01:42.496: INFO: e2e test version: v1.26.13
Apr 11 18:01:42.497: INFO: kube-apiserver version: v1.26.6
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:77
Apr 11 18:01:42.498: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 11 18:01:42.503: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.067 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:77
------------------------------
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:81
[BeforeEach] [sig-scheduling] Multi-AZ Clusters set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:01:42.567
Apr 11 18:01:42.568: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename multi-az 04/11/24 18:01:42.569
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:01:42.592
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:01:42.596
[BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51
STEP: Checking for multi-zone cluster. Schedulable zone count = 0 04/11/24 18:01:42.604
Apr 11 18:01:42.604: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test
[AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/node/init/init.go:32
Apr 11 18:01:42.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters tear down framework | framework.go:193
STEP: Destroying namespace "multi-az-3527" for this suite. 04/11/24 18:01:42.61
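Note: the Multi-AZ specs skip because the framework finds zero schedulable zones. The zone count is derived from the well-known topology.kubernetes.io/zone node label; a minimal sketch of that check in Go is shown below (illustrative only, not the framework's own helper, which additionally filters to schedulable nodes):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // countZones returns the number of distinct values of the
    // topology.kubernetes.io/zone label across the given nodes. Nodes without
    // the label contribute nothing, so an unlabeled cluster yields 0.
    func countZones(nodes []corev1.Node) int {
        zones := map[string]struct{}{}
        for _, n := range nodes {
            if z, ok := n.Labels["topology.kubernetes.io/zone"]; ok && z != "" {
                zones[z] = struct{}{}
            }
        }
        return len(zones)
    }

    func main() {
        // Unlabeled worker nodes, as on this single-zone cluster: prints 0.
        fmt.Println(countZones([]corev1.Node{{}, {}}))
    }

With no zone labels on the nodes, the count is 0 and the spec skips in [BeforeEach], as reported next.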
------------------------------
S [SKIPPED] [0.047 seconds]
[sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51
should spread the pods of a replication controller across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:81
Schedulable zone count is 0, only run for multi-zone clusters, skipping test
In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:539
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:01:42.633
Apr 11 18:01:42.633: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/11/24 18:01:42.635
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:01:42.645
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:01:42.649
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 11 18:01:42.653: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 11 18:01:42.662: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 18:01:42.666: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 11 18:01:42.672: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded)
Apr 11 18:01:42.672: INFO: Container loopdev ready: true, restart count 0
Apr 11 18:01:42.672: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 18:01:42.672: INFO: Container kindnet-cni ready: true, restart count 0
Apr 11 18:01:42.672: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 18:01:42.672: INFO: Container kube-proxy ready: true, restart count 0
Apr 11 18:01:42.672: INFO: back-off-cap from pods-1857 started at 2024-04-11 17:34:32 +0000 UTC (1 container statuses recorded)
Apr 11 18:01:42.672: INFO: Container back-off-cap ready: false, restart count 10
[It] validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:539
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 18:01:42.672
Apr 11 18:01:42.680: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-3512" to be "running"
Apr 11 18:01:42.683: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.128726ms
Apr 11 18:01:44.688: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007845226s
Apr 11 18:01:44.688: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:01:44.691
STEP: Trying to apply a random label on the found node. 04/11/24 18:01:44.702
STEP: verifying the node has the label kubernetes.io/e2e-0ee913e1-c1dc-4c36-982e-422c21c094e1 42 04/11/24 18:01:44.716
STEP: Trying to relaunch the pod, now with labels. 04/11/24 18:01:44.72
Apr 11 18:01:44.725: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-3512" to be "not pending"
Apr 11 18:01:44.728: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 3.60878ms
Apr 11 18:01:46.732: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.007311087s
Apr 11 18:01:46.732: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-0ee913e1-c1dc-4c36-982e-422c21c094e1 off the node v126-worker2 04/11/24 18:01:46.736
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0ee913e1-c1dc-4c36-982e-422c21c094e1 04/11/24 18:01:46.758
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 18:01:46.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-3512" for this suite. 04/11/24 18:01:46.767
------------------------------
• [4.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that required NodeAffinity setting is respected if matching test/e2e/scheduling/predicates.go:539
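Note: the spec above relaunches the "with-labels" pod constrained by requiredDuringSchedulingIgnoredDuringExecution node affinity against the random label it just applied to v126-worker2. A minimal sketch of that pod shape, using the upstream core/v1 types (the image and names here are placeholders, not the ones the suite uses):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithRequiredNodeAffinity builds a pod that can only be scheduled onto
    // nodes carrying labelKey=labelValue, i.e. the node that was labeled above.
    func podWithRequiredNodeAffinity(labelKey, labelValue string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
            Spec: corev1.PodSpec{
                Affinity: &corev1.Affinity{
                    NodeAffinity: &corev1.NodeAffinity{
                        RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                            NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                                MatchExpressions: []corev1.NodeSelectorRequirement{{
                                    Key:      labelKey,
                                    Operator: corev1.NodeSelectorOpIn,
                                    Values:   []string{labelValue},
                                }},
                            }},
                        },
                    },
                },
                Containers: []corev1.Container{{
                    Name:  "with-labels",
                    Image: "registry.k8s.io/pause:3.9", // placeholder image
                }},
            },
        }
    }

Because the affinity is "required during scheduling", the pod stays Pending unless a node with the matching label exists, which is exactly what the spec asserts.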
------------------------------
[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77
[BeforeEach] [sig-scheduling] Multi-AZ Clusters set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:01:46.783
Apr 11 18:01:46.783: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename multi-az 04/11/24 18:01:46.784
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:01:46.794
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:01:46.798
[BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:51
STEP: Checking for multi-zone cluster. Schedulable zone count = 0 04/11/24 18:01:46.805
Apr 11 18:01:46.805: INFO: Schedulable zone count is 0, only run for multi-zone clusters, skipping test
[AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/node/init/init.go:32
Apr 11 18:01:46.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] Multi-AZ Clusters test/e2e/scheduling/ubernetes_lite.go:72
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] Multi-AZ Clusters tear down framework | framework.go:193
STEP: Destroying namespace "multi-az-9632" for this suite. 04/11/24 18:01:46.81
------------------------------
S [SKIPPED] [0.031 seconds]
[sig-scheduling] Multi-AZ Clusters [BeforeEach] test/e2e/scheduling/ubernetes_lite.go:51
should spread the pods of a service across zones [Serial] test/e2e/scheduling/ubernetes_lite.go:77
Schedulable zone count is 0, only run for multi-zone clusters, skipping test
In [BeforeEach] at: test/e2e/scheduling/ubernetes_lite.go:61
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:327
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:01:46.819
Apr 11 18:01:46.819: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 18:01:46.821
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:01:46.829
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:01:46.832
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 11 18:01:46.845: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 18:02:46.872: INFO: Waiting for terminating namespaces to be deleted...
[It] validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:327
STEP: Select a node to run the lower and higher priority pods 04/11/24 18:02:46.875
STEP: Create a low priority pod that consumes 1/1 of node resources 04/11/24 18:02:46.891
Apr 11 18:02:46.903: INFO: Created pod: victim-pod
STEP: Wait for the victim pod to be scheduled 04/11/24 18:02:46.903
Apr 11 18:02:46.903: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-738" to be "running"
Apr 11 18:02:46.906: INFO: Pod "victim-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017766ms
Apr 11 18:02:48.911: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007497373s
Apr 11 18:02:48.911: INFO: Pod "victim-pod" satisfied condition "running"
STEP: Create a high priority pod to trigger preemption of the lower priority pod 04/11/24 18:02:48.911
Apr 11 18:02:48.917: INFO: Created pod: preemptor-pod
STEP: Waiting for the victim pod to be terminating 04/11/24 18:02:48.917
Apr 11 18:02:48.917: INFO: Waiting up to 5m0s for pod "victim-pod" in namespace "sched-preemption-738" to be "is terminating"
Apr 11 18:02:48.921: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3.586401ms
Apr 11 18:02:50.926: INFO: Pod "victim-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008401967s
Apr 11 18:02:50.926: INFO: Pod "victim-pod" satisfied condition "is terminating"
STEP: Verifying the pod has the pod disruption condition 04/11/24 18:02:50.926
Apr 11 18:02:50.929: INFO: Removing pod's "victim-pod" finalizer: "example.com/test-finalizer"
Apr 11 18:02:51.444: INFO: Successfully updated pod "victim-pod"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 18:02:51.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-738" for this suite. 04/11/24 18:02:51.478
------------------------------
• [SLOW TEST] [64.664 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
validates pod disruption condition is added to the preempted pod test/e2e/scheduling/preemption.go:327
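Note: the victim-pod/preemptor-pod sequence above relies on two PriorityClasses and on the DisruptionTarget pod condition that the preempted victim receives. A rough sketch of those pieces with the upstream API types (names and priority values here are illustrative, not the suite's own):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A low- and a high-priority class; the preemptor pod references the higher
    // one, so the scheduler evicts the lower-priority victim to make room.
    var (
        lowPriority  = schedulingv1.PriorityClass{ObjectMeta: metav1.ObjectMeta{Name: "low-priority"}, Value: 100}
        highPriority = schedulingv1.PriorityClass{ObjectMeta: metav1.ObjectMeta{Name: "high-priority"}, Value: 1000}

        preemptor = corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
            Spec: corev1.PodSpec{
                PriorityClassName: highPriority.Name,
                Containers:        []corev1.Container{{Name: "preemptor", Image: "registry.k8s.io/pause:3.9"}}, // placeholder image
            },
        }
    )

    // hasDisruptionTarget reports whether a pod carries the DisruptionTarget
    // condition the spec asserts on (the condition type is part of the core/v1
    // API in the v1.26 release used here).
    func hasDisruptionTarget(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.DisruptionTarget && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

The spec checks that the terminating victim-pod reports such a condition before removing the test finalizer that keeps the pod object around.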
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted test/e2e/scheduling/preemption.go:434
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:02:51.525
Apr 11 18:02:51.525: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/11/24 18:02:51.527
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:02:51.538
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:02:51.542
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 11 18:02:51.557: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 18:03:51.583: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:399
[AfterEach] PodTopologySpread Preemption test/e2e/scheduling/preemption.go:421
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 18:03:51.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-5356" for this suite. 04/11/24 18:03:51.619
------------------------------
S [SKIPPED] [60.099 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption [BeforeEach] test/e2e/scheduling/preemption.go:399
validates proper pods are preempted test/e2e/scheduling/preemption.go:434
At least 2 nodes are required to run the test
In [BeforeEach] at: test/e2e/scheduling/preemption.go:401
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:587
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:03:51.636
Apr 11 18:03:51.636: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/11/24 18:03:51.637
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:03:51.648
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:03:51.652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97
Apr 11 18:03:51.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 11 18:03:51.665: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 18:03:51.668: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 11 18:03:51.673: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded)
Apr 11 18:03:51.673: INFO: Container loopdev ready: true, restart count 0
Apr 11 18:03:51.673: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 18:03:51.674: INFO: Container kindnet-cni ready: true, restart count 0
Apr 11 18:03:51.674: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded)
Apr 11 18:03:51.674: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:587
STEP: Trying to launch a pod without a toleration to get a node which can launch it. 04/11/24 18:03:51.674
Apr 11 18:03:51.681: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-1382" to be "running"
Apr 11 18:03:51.684: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874637ms
Apr 11 18:03:53.688: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.006772709s
Apr 11 18:03:53.688: INFO: Pod "without-toleration" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:03:53.691
STEP: Trying to apply a random taint on the found node. 04/11/24 18:03:53.7
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c1861943-ef9d-4e4a-a98e-2ec95040b1eb=testing-taint-value:NoSchedule 04/11/24 18:03:53.717
STEP: Trying to apply a random label on the found node. 04/11/24 18:03:53.72
STEP: verifying the node has the label kubernetes.io/e2e-label-key-64f94207-7aac-4a6c-a36e-bb390deb777f testing-label-value 04/11/24 18:03:53.735
STEP: Trying to relaunch the pod, now with tolerations. 04/11/24 18:03:53.738
Apr 11 18:03:53.744: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-pred-1382" to be "not pending"
Apr 11 18:03:53.746: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727664ms
Apr 11 18:03:55.751: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.007011964s
Apr 11 18:03:55.751: INFO: Pod "with-tolerations" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-label-key-64f94207-7aac-4a6c-a36e-bb390deb777f off the node v126-worker2 04/11/24 18:03:55.754
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-64f94207-7aac-4a6c-a36e-bb390deb777f 04/11/24 18:03:55.77
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c1861943-ef9d-4e4a-a98e-2ec95040b1eb=testing-taint-value:NoSchedule 04/11/24 18:03:55.786
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 18:03:55.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-1382" for this suite. 04/11/24 18:03:55.794
------------------------------
• [4.164 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if matching test/e2e/scheduling/predicates.go:587
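Note: the spec above applies a random NoSchedule taint to the node and then gives the relaunched "with-tolerations" pod a matching toleration. The matching pair looks roughly like this with the core/v1 types (key and value are placeholders for the random ones logged above):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // A NoSchedule taint and the toleration that matches it; only pods carrying
    // such a toleration may be scheduled onto the tainted node.
    var (
        taint = corev1.Taint{
            Key:    "example.com/e2e-taint-key", // placeholder key
            Value:  "testing-taint-value",
            Effect: corev1.TaintEffectNoSchedule,
        }

        toleration = corev1.Toleration{
            Key:      taint.Key,
            Operator: corev1.TolerationOpEqual,
            Value:    taint.Value,
            Effect:   corev1.TaintEffectNoSchedule,
        }
    )

The toleration goes into the pod's spec.tolerations; without it the relaunched pod would stay Pending on the tainted node, which is the behavior the predicate check exercises.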
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/11/24 18:03:55.823
Apr 11 18:03:55.823: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-priority 04/11/24 18:03:55.825
STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:03:55.835
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:03:55.838
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99
Apr 11 18:03:55.841: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 11 18:04:55.866: INFO: Waiting for terminating namespaces to be deleted...
Apr 11 18:04:55.869: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 11 18:04:55.883: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 11 18:04:55.883: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 11 18:04:55.890: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 11 18:04:55.890: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200
Apr 11 18:04:55.890: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800
Apr 11 18:04:55.890: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200
Apr 11 18:04:55.890: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 11 18:04:55.890: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
[It] Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208
Apr 11 18:04:55.897: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 11 18:04:55.897: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200
Apr 11 18:04:55.897: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800
Apr 11 18:04:55.897: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200
Apr 11 18:04:55.897: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726
Apr 11 18:04:55.897: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945
Apr 11 18:04:55.909: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods. 04/11/24 18:05:00.968
Apr 11 18:05:00.968: INFO: ComputeCPUMemFraction for node: v126-worker2
Apr 11 18:05:00.968: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200
Apr 11 18:05:00.968: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800
Apr 11 18:05:00.968: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200
Apr 11 18:05:00.968: INFO: Pod for on the node: f056a21e-d70a-43a5-a559-23bb20bfd64f-0, Cpu: 43800, Mem: 33561339904
Apr 11 18:05:00.968: INFO: Node: v126-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5
Apr 11 18:05:00.968: INFO: Node: v126-worker2, totalRequestedMemResource: 33718626304, memAllocatableVal: 67412086784, memFraction: 0.5001866566160504
STEP: Trying to apply 10 (tolerable) taints on the first node. 04/11/24 18:05:00.968
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-14cb2462-d312-44d2-8721=testing-taint-value-69d8cc0d-d8bc-4d81-9574-a8943f0abad8:PreferNoSchedule 04/11/24 18:05:00.986
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2f7119e5-b4b6-4ecd-955b=testing-taint-value-73db8f37-7e35-4835-a47a-dea5f1d2e833:PreferNoSchedule 04/11/24 18:05:01.007
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-246dd753-e58e-4369-8530=testing-taint-value-a4db14fe-0f6d-4405-8ee7-ee4376b5ec12:PreferNoSchedule 04/11/24 18:05:01.028
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3a45699d-13a8-4af0-a29f=testing-taint-value-9024ddba-d6ad-4c7d-aff8-0bb04d325e3a:PreferNoSchedule 04/11/24 18:05:01.049
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b0b84149-8b69-40b3-88d9=testing-taint-value-e7538d52-bbfc-425b-80dc-35ff91d3de7d:PreferNoSchedule 04/11/24 18:05:01.07
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dad80540-d911-47d5-8c24=testing-taint-value-d97b1e0e-9aa0-4c65-9b68-eabe9b89d878:PreferNoSchedule 04/11/24 18:05:01.092
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-33413723-be63-4da3-a13a=testing-taint-value-2f1c8f24-15cf-4fc0-959e-cf77d78e2056:PreferNoSchedule 04/11/24 18:05:01.113
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9788bb81-fae9-4bb4-8b05=testing-taint-value-ab93fc50-a81e-484d-a323-71b174becd9f:PreferNoSchedule 04/11/24 18:05:01.134
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5d6bc745-41c2-4439-91b8=testing-taint-value-41cd52df-cfdd-456d-b9ee-03430cea22d2:PreferNoSchedule 04/11/24 18:05:01.156
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4973609d-04b2-4dec-ad09=testing-taint-value-0b0fcdf0-0bb7-4a9c-9276-3a6c627b5fa3:PreferNoSchedule 04/11/24 18:05:01.177
STEP: Adding 10 intolerable taints to all other nodes 04/11/24 18:05:01.18
STEP: Create a pod that tolerates all the taints of the first node. 04/11/24 18:05:01.18
Apr 11 18:05:01.186: INFO: Waiting up to 5m0s for pod "with-tolerations" in namespace "sched-priority-3540" to be "running"
Apr 11 18:05:01.189: INFO: Pod "with-tolerations": Phase="Pending", Reason="", readiness=false. Elapsed: 3.212793ms
Apr 11 18:05:03.193: INFO: Pod "with-tolerations": Phase="Running", Reason="", readiness=true. Elapsed: 2.007024416s
Apr 11 18:05:03.193: INFO: Pod "with-tolerations" satisfied condition "running"
STEP: Pod should prefer scheduled to the node that pod can tolerate. 04/11/24 18:05:03.193
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-14cb2462-d312-44d2-8721=testing-taint-value-69d8cc0d-d8bc-4d81-9574-a8943f0abad8:PreferNoSchedule 04/11/24 18:05:03.216
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2f7119e5-b4b6-4ecd-955b=testing-taint-value-73db8f37-7e35-4835-a47a-dea5f1d2e833:PreferNoSchedule 04/11/24 18:05:03.238
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-246dd753-e58e-4369-8530=testing-taint-value-a4db14fe-0f6d-4405-8ee7-ee4376b5ec12:PreferNoSchedule 04/11/24 18:05:03.259
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3a45699d-13a8-4af0-a29f=testing-taint-value-9024ddba-d6ad-4c7d-aff8-0bb04d325e3a:PreferNoSchedule 04/11/24 18:05:03.28
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b0b84149-8b69-40b3-88d9=testing-taint-value-e7538d52-bbfc-425b-80dc-35ff91d3de7d:PreferNoSchedule 04/11/24 18:05:03.301
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dad80540-d911-47d5-8c24=testing-taint-value-d97b1e0e-9aa0-4c65-9b68-eabe9b89d878:PreferNoSchedule 04/11/24 18:05:03.322
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-33413723-be63-4da3-a13a=testing-taint-value-2f1c8f24-15cf-4fc0-959e-cf77d78e2056:PreferNoSchedule 04/11/24 18:05:03.343
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9788bb81-fae9-4bb4-8b05=testing-taint-value-ab93fc50-a81e-484d-a323-71b174becd9f:PreferNoSchedule 04/11/24 18:05:03.363
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5d6bc745-41c2-4439-91b8=testing-taint-value-41cd52df-cfdd-456d-b9ee-03430cea22d2:PreferNoSchedule 04/11/24 18:05:03.383
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4973609d-04b2-4dec-ad09=testing-taint-value-0b0fcdf0-0bb7-4a9c-9276-3a6c627b5fa3:PreferNoSchedule 04/11/24 18:05:03.403
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32
Apr 11 18:05:05.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-priority-3540" for this suite. 04/11/24 18:05:05.428
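Note: the cpuFraction/memFraction values logged by ComputeCPUMemFraction above are plain requested-to-allocatable ratios, which the spec drives to roughly 0.5 with a balancing pod before applying the PreferNoSchedule taints. A sketch reproducing the logged totals for v126-worker2:

    package main

    import "fmt"

    func main() {
        // Before the balancing pod (totals as logged):
        fmt.Println(200.0 / 88000.0)             // cpuFraction: 0.0022727272727272726
        fmt.Println(157286400.0 / 67412086784.0) // memFraction: ~0.0023332077006304945

        // After the 43800-millicore / 33561339904-byte balancing pod is added:
        fmt.Println(44000.0 / 88000.0)             // cpuFraction: 0.5
        fmt.Println(33718626304.0 / 67412086784.0) // memFraction: ~0.5001866566160504
    }

Balancing the node to ~50% utilization removes resource skew as a factor, so the scoring difference the spec observes comes from the PreferNoSchedule taints and the pod's tolerations rather than from free capacity.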
------------------------------
• [SLOW TEST] [69.611 seconds]
[sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40
Pod should be preferably scheduled to nodes pod can tolerate test/e2e/scheduling/priorities.go:208
04/11/24 18:05:05.428 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:127 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:05:05.498 Apr 11 18:05:05.499: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:05:05.503 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:05:05.515 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:05:05.519 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:05:05.523: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:05:05.531: INFO: Waiting for terminating namespaces to be deleted... 
Apr 11 18:05:05.534: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:05:05.540: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:05:05.541: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:05:05.541: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:05:05.541: INFO: with-tolerations from sched-priority-3540 started at 2024-04-11 18:05:01 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container with-tolerations ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:127 Apr 11 18:05:05.555: INFO: Pod create-loop-devs-tmv9n requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod kindnet-l6j8p requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod kube-proxy-zhx9l requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Using pod capacity: 47055905587 Apr 11 18:05:05.555: INFO: Node: v126-worker2 has local ephemeral resource allocatable: 470559055872 STEP: Starting additional 10 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 04/11/24 18:05:05.555 Apr 11 18:05:05.609: INFO: Waiting for running... 
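
Note: the per-pod ephemeral-storage request is simply the node's allocatable value divided by ten, so the ten "overcommit-N" pods below fill v126-worker2 almost exactly and the extra "additional-pod" is expected to fail scheduling with "Insufficient ephemeral-storage" (the Warning event near the end of the event list). A sketch of the arithmetic, with illustrative variable names:

package main

import "fmt"

func main() {
	// Node allocatable ephemeral storage reported above, in bytes.
	allocatable := int64(470559055872)

	// Each of the 10 overcommit pods requests allocatable/10.
	perPod := allocatable / 10
	fmt.Println(perPod) // 47055905587, the "Using pod capacity" value above

	// Only 2 bytes remain after 10 such pods, so an 11th identical request cannot fit.
	fmt.Println(allocatable - 10*perPod) // 2
}
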
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beae7d1335a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-0 to v126-worker2] 04/11/24 18:05:15.668 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb1cfa03f4], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb1ddbd4ae], Reason = [Created], Message = [Created container overcommit-0] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb2e84e3ac], Reason = [Started], Message = [Started container overcommit-0] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beae823614a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-1 to v126-worker2] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb1e2f7069], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb1f1450c0], Reason = [Created], Message = [Created container overcommit-1] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb2ea1decc], Reason = [Started], Message = [Started container overcommit-1] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beae8750638], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-2 to v126-worker2] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb43da2df7], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb449b106d], Reason = [Created], Message = [Created container overcommit-2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb53393dce], Reason = [Started], Message = [Started container overcommit-2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beae8bfc32a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-3 to v126-worker2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb31c4baea], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb32868426], Reason = [Created], Message = [Created container overcommit-3] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb40f3829b], Reason = [Started], Message = [Started container overcommit-3] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beae90b1476], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-4 to v126-worker2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beb8a761d4e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beb8b2508fd], Reason = [Created], Message = [Created container overcommit-4] 04/11/24 18:05:15.67 STEP: 
Considering event: Type = [Normal], Name = [overcommit-4.17c54beb99d49df2], Reason = [Started], Message = [Started container overcommit-4] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beae95a6e5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-5 to v126-worker2] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb646e465b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb653231fd], Reason = [Created], Message = [Created container overcommit-5] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb75433c30], Reason = [Started], Message = [Started container overcommit-5] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beae9a98079], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-6 to v126-worker2] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb5672af10], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb57554de6], Reason = [Created], Message = [Created container overcommit-6] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb663632a8], Reason = [Started], Message = [Started container overcommit-6] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beae9fe0f2c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-7 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb557ea83e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb562e6c37], Reason = [Created], Message = [Created container overcommit-7] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb65fab42a], Reason = [Started], Message = [Started container overcommit-7] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beaea44d580], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-8 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb797794cd], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb7a29d1fd], Reason = [Created], Message = [Created container overcommit-8] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb87ab0a36], Reason = [Started], Message = [Started container overcommit-8] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beaea8f95a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-9 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb7a7e3c7c], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.673 
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb7b120222], Reason = [Created], Message = [Created container overcommit-9] 04/11/24 18:05:15.673 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb88910179], Reason = [Started], Message = [Started container overcommit-9] 04/11/24 18:05:15.673 STEP: Considering event: Type = [Warning], Name = [additional-pod.17c54bed4257e585], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 Insufficient ephemeral-storage, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..] 04/11/24 18:05:15.68 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:05:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-8277" for this suite. 04/11/24 18:05:16.691 ------------------------------ • [SLOW TEST] [11.198 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:127 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:05:05.498 Apr 11 18:05:05.499: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:05:05.503 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:05:05.515 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:05:05.519 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:05:05.523: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:05:05.531: INFO: Waiting for terminating namespaces to be deleted... 
Apr 11 18:05:05.534: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:05:05.540: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:05:05.541: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:05:05.541: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:05:05.541: INFO: with-tolerations from sched-priority-3540 started at 2024-04-11 18:05:01 +0000 UTC (1 container statuses recorded) Apr 11 18:05:05.541: INFO: Container with-tolerations ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] test/e2e/scheduling/predicates.go:127 Apr 11 18:05:05.555: INFO: Pod create-loop-devs-tmv9n requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod kindnet-l6j8p requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod kube-proxy-zhx9l requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node v126-worker2 Apr 11 18:05:05.555: INFO: Using pod capacity: 47055905587 Apr 11 18:05:05.555: INFO: Node: v126-worker2 has local ephemeral resource allocatable: 470559055872 STEP: Starting additional 10 Pods to fully saturate the cluster local ephemeral resource and trying to start another one 04/11/24 18:05:05.555 Apr 11 18:05:05.609: INFO: Waiting for running... 
STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beae7d1335a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-0 to v126-worker2] 04/11/24 18:05:15.668 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb1cfa03f4], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb1ddbd4ae], Reason = [Created], Message = [Created container overcommit-0] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-0.17c54beb2e84e3ac], Reason = [Started], Message = [Started container overcommit-0] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beae823614a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-1 to v126-worker2] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb1e2f7069], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb1f1450c0], Reason = [Created], Message = [Created container overcommit-1] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-1.17c54beb2ea1decc], Reason = [Started], Message = [Started container overcommit-1] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beae8750638], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-2 to v126-worker2] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb43da2df7], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.669 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb449b106d], Reason = [Created], Message = [Created container overcommit-2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-2.17c54beb53393dce], Reason = [Started], Message = [Started container overcommit-2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beae8bfc32a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-3 to v126-worker2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb31c4baea], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb32868426], Reason = [Created], Message = [Created container overcommit-3] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-3.17c54beb40f3829b], Reason = [Started], Message = [Started container overcommit-3] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beae90b1476], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-4 to v126-worker2] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beb8a761d4e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.67 STEP: Considering event: Type = [Normal], Name = [overcommit-4.17c54beb8b2508fd], Reason = [Created], Message = [Created container overcommit-4] 04/11/24 18:05:15.67 STEP: 
Considering event: Type = [Normal], Name = [overcommit-4.17c54beb99d49df2], Reason = [Started], Message = [Started container overcommit-4] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beae95a6e5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-5 to v126-worker2] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb646e465b], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb653231fd], Reason = [Created], Message = [Created container overcommit-5] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-5.17c54beb75433c30], Reason = [Started], Message = [Started container overcommit-5] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beae9a98079], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-6 to v126-worker2] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb5672af10], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb57554de6], Reason = [Created], Message = [Created container overcommit-6] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-6.17c54beb663632a8], Reason = [Started], Message = [Started container overcommit-6] 04/11/24 18:05:15.671 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beae9fe0f2c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-7 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb557ea83e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb562e6c37], Reason = [Created], Message = [Created container overcommit-7] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-7.17c54beb65fab42a], Reason = [Started], Message = [Started container overcommit-7] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beaea44d580], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-8 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb797794cd], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb7a29d1fd], Reason = [Created], Message = [Created container overcommit-8] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-8.17c54beb87ab0a36], Reason = [Started], Message = [Started container overcommit-8] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beaea8f95a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8277/overcommit-9 to v126-worker2] 04/11/24 18:05:15.672 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb7a7e3c7c], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:05:15.673 
STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb7b120222], Reason = [Created], Message = [Created container overcommit-9] 04/11/24 18:05:15.673 STEP: Considering event: Type = [Normal], Name = [overcommit-9.17c54beb88910179], Reason = [Started], Message = [Started container overcommit-9] 04/11/24 18:05:15.673 STEP: Considering event: Type = [Warning], Name = [additional-pod.17c54bed4257e585], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 Insufficient ephemeral-storage, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..] 04/11/24 18:05:15.68 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:05:16.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-8277" for this suite. 04/11/24 18:05:16.691 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:05:16.706 Apr 11 18:05:16.706: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/11/24 18:05:16.708 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:05:16.72 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:05:16.724 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 11 18:05:16.728: INFO: Waiting up to 1m0s for all nodes to be ready Apr 11 18:06:16.756: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:06:16.760: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:06:16.773: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:06:16.773: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
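
Note: the PodTopologySpread Scoring spec that starts here (and the PodTopologySpread Filtering spec further down) is skipped on this cluster: its BeforeEach requires at least two schedulable nodes, and per the FailedScheduling message above only v126-worker2 is usable (the control-plane node is tainted and the remaining node is unschedulable). For reference, a constraint of the kind these specs exercise looks roughly like the sketch below; the selector and values are illustrative, not the test's fixtures.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pods matching the selector may differ by at most MaxSkew per topology domain
	// (here, per node). ScheduleAnyway makes it a scoring preference, as in the
	// Scoring spec; the Filtering spec uses DoNotSchedule, a hard constraint.
	c := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/hostname",
		WhenUnsatisfiable: v1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "example"},
		},
	}
	fmt.Printf("%+v\n", c)
}
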
Apr 11 18:06:16.781: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 11 18:06:16.781: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200 Apr 11 18:06:16.781: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800 Apr 11 18:06:16.781: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200 Apr 11 18:06:16.781: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 11 18:06:16.781: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:16.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-3071" for this suite. 04/11/24 18:06:16.787 ------------------------------ S [SKIPPED] [60.086 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring [BeforeEach] test/e2e/scheduling/priorities.go:271 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed test/e2e/scheduling/priorities.go:288 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:05:16.706 Apr 11 18:05:16.706: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/11/24 18:05:16.708 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:05:16.72 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:05:16.724 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 11 18:05:16.728: INFO: Waiting up to 1m0s for all nodes to be ready Apr 11 18:06:16.756: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:06:16.760: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:06:16.773: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:06:16.773: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Apr 11 18:06:16.781: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 11 18:06:16.781: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200 Apr 11 18:06:16.781: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800 Apr 11 18:06:16.781: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200 Apr 11 18:06:16.781: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 11 18:06:16.781: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [BeforeEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:271 [AfterEach] PodTopologySpread Scoring test/e2e/scheduling/priorities.go:282 [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:16.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-3071" for this suite. 04/11/24 18:06:16.787 << End Captured GinkgoWriter Output At least 2 nodes are required to run the test In [BeforeEach] at: test/e2e/scheduling/priorities.go:273 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:16.837 Apr 11 18:06:16.837: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:16.839 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:16.851 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:16.855 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:16.859: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:16.868: INFO: Waiting for terminating namespaces to be deleted... 
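
Note: the steps logged below create three pods that all bind host port 54321 on the same node but do not conflict, because a host-port conflict is keyed on the (hostIP, hostPort, protocol) triple: pod1 binds 127.0.0.1/TCP, pod2 binds 172.22.0.3/TCP, and pod3 binds 172.22.0.3/UDP. A minimal sketch of the three port specs using k8s.io/api/core/v1; the containerPort values are illustrative, not the test's manifests.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Same HostPort on the same node, but each entry differs in HostIP or Protocol,
	// so the scheduler accepts all three pods.
	ports := []v1.ContainerPort{
		{ContainerPort: 54321, HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},  // pod1
		{ContainerPort: 54321, HostPort: 54321, HostIP: "172.22.0.3", Protocol: v1.ProtocolTCP}, // pod2
		{ContainerPort: 54321, HostPort: 54321, HostIP: "172.22.0.3", Protocol: v1.ProtocolUDP}, // pod3
	}
	for i, p := range ports {
		fmt.Printf("pod%d binds %s %s:%d\n", i+1, p.Protocol, p.HostIP, p.HostPort)
	}
}
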
Apr 11 18:06:16.871: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:16.881: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:16.881: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:16.881: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 18:06:16.881 Apr 11 18:06:16.900: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-9583" to be "running" Apr 11 18:06:16.904: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.21008ms Apr 11 18:06:18.908: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008071446s Apr 11 18:06:18.908: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:06:18.912 STEP: Trying to apply a random label on the found node. 04/11/24 18:06:18.926 STEP: verifying the node has the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 90 04/11/24 18:06:18.939 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled 04/11/24 18:06:18.943 Apr 11 18:06:18.948: INFO: Waiting up to 5m0s for pod "pod1" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:18.951: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023126ms Apr 11 18:06:20.956: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007777679s Apr 11 18:06:20.956: INFO: Pod "pod1" satisfied condition "not pending" STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.22.0.3 on the node which pod1 resides and expect scheduled 04/11/24 18:06:20.956 Apr 11 18:06:20.963: INFO: Waiting up to 5m0s for pod "pod2" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:20.966: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244554ms Apr 11 18:06:22.971: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007927638s Apr 11 18:06:22.971: INFO: Pod "pod2" satisfied condition "not pending" STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.22.0.3 but use UDP protocol on the node which pod2 resides 04/11/24 18:06:22.971 Apr 11 18:06:22.977: INFO: Waiting up to 5m0s for pod "pod3" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:22.980: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.992195ms Apr 11 18:06:24.984: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007125929s Apr 11 18:06:24.984: INFO: Pod "pod3" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 off the node v126-worker2 04/11/24 18:06:24.984 STEP: verifying the node doesn't have the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 04/11/24 18:06:25 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:25.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-9583" for this suite. 04/11/24 18:06:25.009 ------------------------------ • [SLOW TEST] [8.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:16.837 Apr 11 18:06:16.837: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:16.839 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:16.851 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:16.855 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:16.859: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:16.868: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:06:16.871: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:16.881: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:16.881: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:16.881: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:16.881: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol test/e2e/scheduling/predicates.go:665 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 18:06:16.881 Apr 11 18:06:16.900: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-9583" to be "running" Apr 11 18:06:16.904: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.21008ms Apr 11 18:06:18.908: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008071446s Apr 11 18:06:18.908: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:06:18.912 STEP: Trying to apply a random label on the found node. 04/11/24 18:06:18.926 STEP: verifying the node has the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 90 04/11/24 18:06:18.939 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled 04/11/24 18:06:18.943 Apr 11 18:06:18.948: INFO: Waiting up to 5m0s for pod "pod1" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:18.951: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023126ms Apr 11 18:06:20.956: INFO: Pod "pod1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007777679s Apr 11 18:06:20.956: INFO: Pod "pod1" satisfied condition "not pending" STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.22.0.3 on the node which pod1 resides and expect scheduled 04/11/24 18:06:20.956 Apr 11 18:06:20.963: INFO: Waiting up to 5m0s for pod "pod2" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:20.966: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244554ms Apr 11 18:06:22.971: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007927638s Apr 11 18:06:22.971: INFO: Pod "pod2" satisfied condition "not pending" STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.22.0.3 but use UDP protocol on the node which pod2 resides 04/11/24 18:06:22.971 Apr 11 18:06:22.977: INFO: Waiting up to 5m0s for pod "pod3" in namespace "sched-pred-9583" to be "not pending" Apr 11 18:06:22.980: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.992195ms Apr 11 18:06:24.984: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.007125929s Apr 11 18:06:24.984: INFO: Pod "pod3" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 off the node v126-worker2 04/11/24 18:06:24.984 STEP: verifying the node doesn't have the label kubernetes.io/e2e-57d4710e-2da5-4141-a53c-d84c9468fa68 04/11/24 18:06:25 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:25.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-9583" for this suite. 
04/11/24 18:06:25.009 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:748 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:25.035 Apr 11 18:06:25.035: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:25.037 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:25.048 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:25.052 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:25.056: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:25.065: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:06:25.068: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:25.074: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:25.074: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:25.074: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod1 from sched-pred-9583 started at 2024-04-11 18:06:18 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod2 from sched-pred-9583 started at 2024-04-11 18:06:20 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod3 from sched-pred-9583 started at 2024-04-11 18:06:22 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:731 [AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:742 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:25.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-4099" for this suite. 04/11/24 18:06:25.078 ------------------------------ S [SKIPPED] [0.048 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering [BeforeEach] test/e2e/scheduling/predicates.go:731 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes test/e2e/scheduling/predicates.go:748 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:25.035 Apr 11 18:06:25.035: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:25.037 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:25.048 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:25.052 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:25.056: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:25.065: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:06:25.068: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:25.074: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:25.074: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:25.074: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod1 from sched-pred-9583 started at 2024-04-11 18:06:18 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod2 from sched-pred-9583 started at 2024-04-11 18:06:20 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.074: INFO: pod3 from sched-pred-9583 started at 2024-04-11 18:06:22 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.074: INFO: Container agnhost ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:731 [AfterEach] PodTopologySpread Filtering test/e2e/scheduling/predicates.go:742 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:25.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-4099" for this suite. 
04/11/24 18:06:25.078 << End Captured GinkgoWriter Output At least 2 nodes are required to run the test In [BeforeEach] at: test/e2e/scheduling/predicates.go:733 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for test/e2e/scheduling/predicates.go:276 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:25.155 Apr 11 18:06:25.155: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:25.157 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:25.167 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:25.171 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:25.174: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:25.182: INFO: Waiting for terminating namespaces to be deleted... 
Apr 11 18:06:25.185: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:25.192: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:25.192: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:25.192: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod1 from sched-pred-9583 started at 2024-04-11 18:06:18 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod2 from sched-pred-9583 started at 2024-04-11 18:06:20 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod3 from sched-pred-9583 started at 2024-04-11 18:06:22 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:221 STEP: Add RuntimeClass and fake resource 04/11/24 18:06:25.2 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 18:06:25.2 Apr 11 18:06:25.207: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-7740" to be "running" Apr 11 18:06:25.210: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.783091ms Apr 11 18:06:27.215: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007545388s Apr 11 18:06:27.215: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:06:27.219 Apr 11 18:06:27.248: INFO: Unexpected error: failed to create RuntimeClass resource: <*errors.StatusError | 0xc004ce7400>: { ErrStatus: code: 409 details: group: node.k8s.io kind: runtimeclasses name: test-handler message: runtimeclasses.node.k8s.io "test-handler" already exists metadata: {} reason: AlreadyExists status: Failure, } Apr 11 18:06:27.248: FAIL: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.4.1() test/e2e/scheduling/predicates.go:253 +0x745 [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:256 STEP: Remove fake resource and RuntimeClass 04/11/24 18:06:27.249 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:27.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 STEP: dump namespace information after failure 04/11/24 18:06:27.269 STEP: Collecting events from namespace "sched-pred-7740". 
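
Note: the failure above is a 409 Conflict from the API server, not a scheduling problem: a RuntimeClass named "test-handler" already existed in the cluster, most likely left behind by an earlier run of this spec whose cleanup did not complete, so the BeforeEach aborts before the overhead check runs. For reference, a RuntimeClass with pod overhead looks roughly like the sketch below, built with k8s.io/api/node/v1; the overhead quantities are illustrative, not the test's values.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "test-handler" is the name visible in the AlreadyExists error above. The
	// Overhead.PodFixed amounts are added to the pod's requests when the scheduler
	// and kubelet account for a pod using this runtime class.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "test-handler",
		Overhead: &nodev1.Overhead{
			PodFixed: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("100m"),
				v1.ResourceMemory: resource.MustParse("128Mi"),
			},
		},
	}
	fmt.Printf("%s overhead: %v\n", rc.Name, rc.Overhead.PodFixed)
}
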
04/11/24 18:06:27.269 STEP: Found 4 events. 04/11/24 18:06:27.273 Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-7740/without-label to v126-worker2 Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {kubelet v126-worker2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {kubelet v126-worker2} Created: Created container without-label Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:26 +0000 UTC - event for without-label: {kubelet v126-worker2} Started: Started container without-label Apr 11 18:06:27.275: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 18:06:27.275: INFO: Apr 11 18:06:27.280: INFO: Logging node info for node v126-control-plane Apr 11 18:06:27.283: INFO: Node Info: &Node{ObjectMeta:{v126-control-plane 3a64757e-5950-42e6-b8ed-4667f760117e 7537698 0 2024-02-15 12:43:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2024-04-11 18:06:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:43:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.4,},NodeAddress{Type:Hostname,Address:v126-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a96e30d08f8c42b585519e2395c12ea2,SystemUUID:a3f13d5f-0717-4c0d-a2df-008e7d843a90,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.284: INFO: Logging kubelet events for node v126-control-plane Apr 11 18:06:27.287: INFO: Logging pods the kubelet thinks is on node v126-control-plane Apr 11 18:06:27.312: INFO: kube-apiserver-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-apiserver ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-controller-manager-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-controller-manager ready: true, restart count 0 Apr 11 18:06:27.312: INFO: coredns-787d4945fb-w6k86 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container coredns ready: true, restart count 0 Apr 11 18:06:27.312: INFO: coredns-787d4945fb-xp5nv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container coredns ready: true, restart count 0 Apr 11 18:06:27.312: INFO: local-path-provisioner-6bd6454576-2g84t started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container local-path-provisioner ready: true, restart count 0 Apr 11 18:06:27.312: INFO: create-loop-devs-d8k28 started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.312: INFO: etcd-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container etcd ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-scheduler-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-scheduler ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-proxy-lxqfk started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kindnet-vn4j4 started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.385: INFO: Latency metrics for node v126-control-plane Apr 11 18:06:27.385: INFO: Logging node info for node v126-worker Apr 11 18:06:27.388: INFO: Node Info: &Node{ObjectMeta:{v126-worker d69cee07-558d-4498-86d9-cff1abedd857 7537232 0 2024-02-15 12:43:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2024-03-28 18:03:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}} status} {kube-controller-manager Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}} } {kubectl Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:spec":{"f:unschedulable":{}}} } {kubelet Update v1 2024-04-11 18:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker,Unschedulable:true,Taints:[]Taint{Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:2024-03-28 19:11:09 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.2,},NodeAddress{Type:Hostname,Address:v126-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d18212626141459c831725483d7679ab,SystemUUID:398bd568-4555-4b1a-8660-f75be5056848,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 
(bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 
registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:e64fe49f059f513a09c772a8972172b2af6833d092c06cc311171d7135e4525a docker.io/aquasec/kube-hunter:0.6.8],SizeBytes:38278203,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:fc259355994e6c6c1025a7cd2d1bdbf201708e9e11ef1dfd3ef787a7ce45730d registry.k8s.io/build-image/distroless-iptables:v0.2.9],SizeBytes:9501695,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.389: INFO: Logging kubelet events for node v126-worker Apr 11 18:06:27.392: INFO: Logging pods the kubelet thinks is on node v126-worker Apr 11 18:06:27.415: INFO: create-loop-devs-qf7hw started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.415: INFO: kindnet-llt78 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.415: INFO: kube-proxy-6gjpv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.681: INFO: Latency metrics for node v126-worker Apr 11 18:06:27.681: INFO: Logging node info for node v126-worker2 Apr 11 18:06:27.684: INFO: Node Info: &Node{ObjectMeta:{v126-worker2 325f688d-d472-4d00-af05-b1602ff4d011 7537708 0 2024-02-15 12:43:24 
+0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2024-03-23 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2024-04-11 18:03:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2024-04-11 18:06:27 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:capacity":{"f:example.com/fakecpu":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.3,},NodeAddress{Type:Hostname,Address:v126-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a4f500a92ab44e68eb943ba261bf2b3,SystemUUID:3a962073-037f-4c28-a122-8f4b5dfc4ca0,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 
registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.684: INFO: Logging kubelet events for node v126-worker2 Apr 11 18:06:27.687: INFO: Logging pods the kubelet thinks is on node v126-worker2 Apr 11 18:06:27.711: INFO: create-loop-devs-tmv9n started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.712: INFO: kube-proxy-zhx9l started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod1 started at 2024-04-11 18:06:18 +0000 UTC 
(0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:27.712: INFO: kindnet-l6j8p started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod2 started at 2024-04-11 18:06:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod3 started at 2024-04-11 18:06:22 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:28.042: INFO: Latency metrics for node v126-worker2 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-7740" for this suite. 04/11/24 18:06:28.042 ------------------------------ • [FAILED] [2.894 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach] test/e2e/scheduling/predicates.go:221 verify pod overhead is accounted for test/e2e/scheduling/predicates.go:276 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:25.155 Apr 11 18:06:25.155: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:06:25.157 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:25.167 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:25.171 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:06:25.174: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:06:25.182: INFO: Waiting for terminating namespaces to be deleted... 
Apr 11 18:06:25.185: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:06:25.192: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:25.192: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:25.192: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod1 from sched-pred-9583 started at 2024-04-11 18:06:18 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod2 from sched-pred-9583 started at 2024-04-11 18:06:20 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:25.192: INFO: pod3 from sched-pred-9583 started at 2024-04-11 18:06:22 +0000 UTC (1 container statuses recorded) Apr 11 18:06:25.192: INFO: Container agnhost ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:221 STEP: Add RuntimeClass and fake resource 04/11/24 18:06:25.2 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/11/24 18:06:25.2 Apr 11 18:06:25.207: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-7740" to be "running" Apr 11 18:06:25.210: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.783091ms Apr 11 18:06:27.215: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007545388s Apr 11 18:06:27.215: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:06:27.219 Apr 11 18:06:27.248: INFO: Unexpected error: failed to create RuntimeClass resource: <*errors.StatusError | 0xc004ce7400>: { ErrStatus: code: 409 details: group: node.k8s.io kind: runtimeclasses name: test-handler message: runtimeclasses.node.k8s.io "test-handler" already exists metadata: {} reason: AlreadyExists status: Failure, } Apr 11 18:06:27.248: FAIL: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.4.1() test/e2e/scheduling/predicates.go:253 +0x745 [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run test/e2e/scheduling/predicates.go:256 STEP: Remove fake resource and RuntimeClass 04/11/24 18:06:27.249 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:06:27.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 STEP: dump namespace information after failure 04/11/24 18:06:27.269 STEP: Collecting events from namespace "sched-pred-7740". 
04/11/24 18:06:27.269 STEP: Found 4 events. 04/11/24 18:06:27.273 Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-7740/without-label to v126-worker2 Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {kubelet v126-worker2} Pulled: Container image "registry.k8s.io/pause:3.9" already present on machine Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:25 +0000 UTC - event for without-label: {kubelet v126-worker2} Created: Created container without-label Apr 11 18:06:27.273: INFO: At 2024-04-11 18:06:26 +0000 UTC - event for without-label: {kubelet v126-worker2} Started: Started container without-label Apr 11 18:06:27.275: INFO: POD NODE PHASE GRACE CONDITIONS Apr 11 18:06:27.275: INFO: Apr 11 18:06:27.280: INFO: Logging node info for node v126-control-plane Apr 11 18:06:27.283: INFO: Node Info: &Node{ObjectMeta:{v126-control-plane 3a64757e-5950-42e6-b8ed-4667f760117e 7537698 0 2024-02-15 12:43:04 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2024-04-11 18:06:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:42:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:06:25 +0000 UTC,LastTransitionTime:2024-02-15 12:43:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.4,},NodeAddress{Type:Hostname,Address:v126-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a96e30d08f8c42b585519e2395c12ea2,SystemUUID:a3f13d5f-0717-4c0d-a2df-008e7d843a90,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 
registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.284: INFO: Logging kubelet events for node v126-control-plane Apr 11 18:06:27.287: INFO: Logging pods the kubelet thinks is on node v126-control-plane Apr 11 18:06:27.312: INFO: kube-apiserver-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-apiserver ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-controller-manager-v126-control-plane started at 2024-02-15 12:43:09 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-controller-manager ready: true, restart count 0 Apr 11 18:06:27.312: INFO: coredns-787d4945fb-w6k86 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container coredns ready: true, restart count 0 Apr 11 18:06:27.312: INFO: coredns-787d4945fb-xp5nv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container coredns ready: true, restart count 0 Apr 11 18:06:27.312: INFO: local-path-provisioner-6bd6454576-2g84t started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container local-path-provisioner ready: true, restart count 0 Apr 11 18:06:27.312: INFO: create-loop-devs-d8k28 started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.312: INFO: etcd-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container etcd ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-scheduler-v126-control-plane started at 2024-02-15 12:43:08 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-scheduler ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kube-proxy-lxqfk started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.312: INFO: kindnet-vn4j4 started at 2024-02-15 12:43:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.312: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.385: INFO: Latency metrics for node v126-control-plane Apr 11 18:06:27.385: INFO: Logging node info for node v126-worker Apr 11 18:06:27.388: INFO: Node Info: &Node{ObjectMeta:{v126-worker d69cee07-558d-4498-86d9-cff1abedd857 7537232 0 2024-02-15 12:43:24 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2024-03-28 18:03:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:example.com/fakecpu":{},"f:scheduling.k8s.io/foo":{}}}} status} {kube-controller-manager Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}} } {kubectl Update v1 2024-03-28 19:11:09 +0000 UTC FieldsV1 {"f:spec":{"f:unschedulable":{}}} } {kubelet Update v1 2024-04-11 18:04:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker,Unschedulable:true,Taints:[]Taint{Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:2024-03-28 19:11:09 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{1 0} {} 1 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:04:36 +0000 UTC,LastTransitionTime:2024-02-15 12:43:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.2,},NodeAddress{Type:Hostname,Address:v126-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d18212626141459c831725483d7679ab,SystemUUID:398bd568-4555-4b1a-8660-f75be5056848,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 
(bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 
registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:e64fe49f059f513a09c772a8972172b2af6833d092c06cc311171d7135e4525a docker.io/aquasec/kube-hunter:0.6.8],SizeBytes:38278203,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:23d4ae0566b98dfee53d4b7a9ef824b6ed1c6b3a8f52bab927e5521f871b5104 docker.io/aquasec/kube-bench:v0.6.10],SizeBytes:18243491,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/build-image/distroless-iptables@sha256:fc259355994e6c6c1025a7cd2d1bdbf201708e9e11ef1dfd3ef787a7ce45730d registry.k8s.io/build-image/distroless-iptables:v0.2.9],SizeBytes:9501695,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.389: INFO: Logging kubelet events for node v126-worker Apr 11 18:06:27.392: INFO: Logging pods the kubelet thinks is on node v126-worker Apr 11 18:06:27.415: INFO: create-loop-devs-qf7hw started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.415: INFO: kindnet-llt78 started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.415: INFO: kube-proxy-6gjpv started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.415: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.681: INFO: Latency metrics for node v126-worker Apr 11 18:06:27.681: INFO: Logging node info for node v126-worker2 Apr 11 18:06:27.684: INFO: Node Info: &Node{ObjectMeta:{v126-worker2 325f688d-d472-4d00-af05-b1602ff4d011 7537708 0 2024-02-15 12:43:24 
+0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:v126-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:v126-worker2] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2024-02-15 12:43:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2024-02-15 12:43:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2024-03-23 10:52:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2024-04-11 18:03:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2024-04-11 18:06:27 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:example.com/fakecpu":{}},"f:capacity":{"f:example.com/fakecpu":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v126/v126-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412086784 0} {} 65832116Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2024-04-11 18:03:29 +0000 UTC,LastTransitionTime:2024-02-15 12:43:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.22.0.3,},NodeAddress{Type:Hostname,Address:v126-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9a4f500a92ab44e68eb943ba261bf2b3,SystemUUID:3a962073-037f-4c28-a122-8f4b5dfc4ca0,BootID:3ece24be-6f26-4926-9346-83e0950952a5,KernelVersion:5.15.0-53-generic,OSImage:Debian GNU/Linux 11 (bullseye),ContainerRuntimeVersion:containerd://1.7.1,KubeletVersion:v1.26.6,KubeProxyVersion:v1.26.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/litmuschaos/go-runner@sha256:b4aaa2ee36bf687dd0f147ced7dce708398fae6d8410067c9ad9a90f162d55e5 docker.io/litmuschaos/go-runner:2.14.0],SizeBytes:170207512,},ContainerImage{Names:[docker.io/sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 docker.io/sirot/netperf-latest:latest],SizeBytes:118405146,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:24aaf2626d6b27864c29de2097e8bbb840b3a414271bf7c8995e431e47d8408e registry.k8s.io/e2e-test-images/jessie-dnsutils:1.7],SizeBytes:112030336,},ContainerImage{Names:[registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/sig-storage/nfs-provisioner@sha256:e943bb77c7df05ebdc8c7888b2db289b13bf9f012d6a3a5a74f14d4d5743d439 registry.k8s.io/sig-storage/nfs-provisioner:v3.0.1],SizeBytes:90632047,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ba23b53b0e943e1556160fd3d7e445268699b578d6d1ffcce645a3cfafebb3db registry.k8s.io/kube-apiserver:v1.26.6],SizeBytes:80511487,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:719bf3e60c026520ed06d4f65a6df78f53a838e8675c058f25582d5067117d99 registry.k8s.io/kube-controller-manager:v1.26.6],SizeBytes:68657293,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:ffc23ebf13cca095f49e15bc00a2fc75fc6ae75e10104169680d9cac711339b8 
registry.k8s.io/kube-proxy:v1.26.6],SizeBytes:67229690,},ContainerImage{Names:[docker.io/library/import-2023-06-15@sha256:b67a0068e2439c496b04bb021d953b966868421451aa88f2c3701c6b4ab77d4f registry.k8s.io/kube-scheduler:v1.26.6],SizeBytes:57880717,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2 registry.k8s.io/etcd:3.5.10-0],SizeBytes:56649232,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e registry.k8s.io/e2e-test-images/agnhost:2.43],SizeBytes:51706353,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nautilus@sha256:80ba6c8c44f9623f06e868a1aa66026c8ec438ad814f9ec95e9333b415fe3550 registry.k8s.io/e2e-test-images/nautilus:1.7],SizeBytes:49641698,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:3fe7acf013d1264ffded116b80a73dc129a449b0fccdb8d21af8279f2233f36e registry.k8s.io/e2e-test-images/httpd:2.4.39-4],SizeBytes:41901587,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22 registry.k8s.io/e2e-test-images/httpd:2.4.38-4],SizeBytes:40764257,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[docker.io/litmuschaos/chaos-operator@sha256:69b1a6ff1409fc80cf169503e29d10e049b46108e57436e452e3800f5f911d70 docker.io/litmuschaos/chaos-operator:2.14.0],SizeBytes:28963838,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20230511-dc714da8],SizeBytes:27731571,},ContainerImage{Names:[docker.io/litmuschaos/chaos-runner@sha256:a5fcf3f1766975ec6e4730c0aefdf9705af20c67d9ff68372168c8856acba7af docker.io/litmuschaos/chaos-runner:2.14.0],SizeBytes:26125622,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/sample-apiserver@sha256:8d70890151aa5d096f331cb9da1b9cd5be0412b7363fe67b5c3befdcaa2a28d0 registry.k8s.io/e2e-test-images/sample-apiserver:1.17.7],SizeBytes:25667066,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8 registry.k8s.io/sig-storage/csi-provisioner:v3.3.0],SizeBytes:25491225,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7 registry.k8s.io/sig-storage/csi-resizer:v1.6.0],SizeBytes:24148884,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b registry.k8s.io/sig-storage/csi-attacher:v4.0.0],SizeBytes:23847201,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v20230511-dc714da8],SizeBytes:19351145,},ContainerImage{Names:[registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5 registry.k8s.io/sig-storage/hostpathplugin:v1.9.0],SizeBytes:18758628,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf registry.k8s.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonroot@sha256:ee9f50b3c64b174d296d91ca9f69a914ac30e59095dfb462b2b518ad28a63655 
registry.k8s.io/e2e-test-images/nonroot:1.4],SizeBytes:17747885,},ContainerImage{Names:[registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:db048754ae68ae337d8fa96494c96d2a1204c3320f5dcf7e8e71085adec85da6 registry.k8s.io/e2e-test-images/nginx:1.15-4],SizeBytes:7000082,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nginx@sha256:5c99cf6a02adda929b10321dbf4ecfa00d87be9ba4fb456006237d530ab4baa1 registry.k8s.io/e2e-test-images/nginx:1.14-4],SizeBytes:6978614,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac registry.k8s.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20230510-486859a6],SizeBytes:3052318,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf registry.k8s.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937 registry.k8s.io/e2e-test-images/busybox:1.29-4],SizeBytes:731990,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause:3.9],SizeBytes:321520,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Apr 11 18:06:27.684: INFO: Logging kubelet events for node v126-worker2 Apr 11 18:06:27.687: INFO: Logging pods the kubelet thinks is on node v126-worker2 Apr 11 18:06:27.711: INFO: create-loop-devs-tmv9n started at 2024-02-15 12:43:26 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:06:27.712: INFO: kube-proxy-zhx9l started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod1 started at 2024-04-11 18:06:18 +0000 UTC 
(0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:27.712: INFO: kindnet-l6j8p started at 2024-02-15 12:43:25 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod2 started at 2024-04-11 18:06:20 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:27.712: INFO: pod3 started at 2024-04-11 18:06:22 +0000 UTC (0+1 container statuses recorded) Apr 11 18:06:27.712: INFO: Container agnhost ready: true, restart count 0 Apr 11 18:06:28.042: INFO: Latency metrics for node v126-worker2 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-7740" for this suite. 04/11/24 18:06:28.042 << End Captured GinkgoWriter Output Apr 11 18:06:27.248: failed to create RuntimeClass resource: runtimeclasses.node.k8s.io "test-handler" already exists In [BeforeEach] at: test/e2e/scheduling/predicates.go:253 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:28.061 Apr 11 18:06:28.061: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/11/24 18:06:28.063 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:28.074 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:28.078 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 11 18:06:28.082: INFO: Waiting up to 1m0s for all nodes to be ready Apr 11 18:07:28.109: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:07:28.113: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:07:28.126: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:07:28.126: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
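The one hard failure in this run is the RuntimeClass error reported above: the pod-overhead spec's BeforeEach tries to create a RuntimeClass named "test-handler", and the object is still present from an earlier run, so the create is rejected with "already exists". Deleting the stale object before re-running clears the condition; "kubectl delete runtimeclass test-handler" should be enough. The Go sketch below does the same through client-go, using the kubeconfig path from this log; it is only an illustration of the cleanup, not part of the e2e suite itself.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig this e2e run used.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Remove the leftover RuntimeClass so the pod-overhead spec can recreate it.
	if err := cs.NodeV1().RuntimeClasses().Delete(context.TODO(), "test-handler", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deleted stale RuntimeClass test-handler")
}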
Apr 11 18:07:28.133: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 11 18:07:28.133: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200 Apr 11 18:07:28.133: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800 Apr 11 18:07:28.133: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200 Apr 11 18:07:28.133: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 11 18:07:28.133: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 Apr 11 18:07:28.133: INFO: Requires at least 2 nodes (not 1) [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:28.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-4626" for this suite. 04/11/24 18:07:28.138 ------------------------------ S [SKIPPED] [60.082 seconds] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/framework.go:40 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:06:28.061 Apr 11 18:06:28.061: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-priority 04/11/24 18:06:28.063 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:06:28.074 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:06:28.078 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:99 Apr 11 18:06:28.082: INFO: Waiting up to 1m0s for all nodes to be ready Apr 11 18:07:28.109: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:07:28.113: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 11 18:07:28.126: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 11 18:07:28.126: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
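The cpuFraction and memFraction values logged above are simply the node's total requested resources divided by its allocatable capacity, both taken straight from the preceding lines (200 requested millicores against 88000 allocatable, 157286400 requested bytes against 67412086784 allocatable). A minimal standalone Go sketch reproducing the two numbers:

package main

import "fmt"

func main() {
	// Totals copied from the log for node v126-worker2.
	requestedCPUMilli, allocatableCPUMilli := 200.0, 88000.0
	requestedMemBytes, allocatableMemBytes := 157286400.0, 67412086784.0

	// fraction = requested / allocatable
	fmt.Println(requestedCPUMilli / allocatableCPUMilli) // ~0.0022727272727272726
	fmt.Println(requestedMemBytes / allocatableMemBytes) // ~0.0023332077006304945
}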
Apr 11 18:07:28.133: INFO: ComputeCPUMemFraction for node: v126-worker2 Apr 11 18:07:28.133: INFO: Pod for on the node: create-loop-devs-tmv9n, Cpu: 100, Mem: 209715200 Apr 11 18:07:28.133: INFO: Pod for on the node: kindnet-l6j8p, Cpu: 100, Mem: 52428800 Apr 11 18:07:28.133: INFO: Pod for on the node: kube-proxy-zhx9l, Cpu: 100, Mem: 209715200 Apr 11 18:07:28.133: INFO: Node: v126-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 88000, cpuFraction: 0.0022727272727272726 Apr 11 18:07:28.133: INFO: Node: v126-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 67412086784, memFraction: 0.0023332077006304945 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms test/e2e/scheduling/priorities.go:124 Apr 11 18:07:28.133: INFO: Requires at least 2 nodes (not 1) [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:28.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/scheduling/priorities.go:96 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPriorities [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-priority-4626" for this suite. 04/11/24 18:07:28.138 << End Captured GinkgoWriter Output Requires at least 2 nodes (not 1) In [It] at: test/e2e/scheduling/priorities.go:126 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:630 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:07:28.175 Apr 11 18:07:28.175: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:07:28.177 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:28.189 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:28.193 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:07:28.197: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:07:28.206: INFO: Waiting for terminating namespaces to be deleted... 
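The PodAntiAffinity spec above is skipped with "Requires at least 2 nodes (not 1)" because only one of the three nodes can currently take test pods: the control-plane carries its node-role taint, and the node dump earlier shows v126-worker cordoned (Unschedulable:true with the node.kubernetes.io/unschedulable:NoSchedule taint, set via kubectl on 2024-03-28). If that cordon is not intentional, "kubectl uncordon v126-worker" restores the second schedulable worker; the client-go sketch below does the same by clearing .spec.unschedulable, assuming the kubeconfig path from this log. The unschedulable taint is managed automatically from that field.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Fetch the cordoned node, clear the cordon flag, and write it back.
	node, err := cs.CoreV1().Nodes().Get(ctx, "v126-worker", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Spec.Unschedulable = false
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}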
Apr 11 18:07:28.209: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:07:28.215: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.215: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:07:28.215: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.216: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:07:28.216: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.216: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:630 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 04/11/24 18:07:28.216 Apr 11 18:07:28.224: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-3266" to be "running" Apr 11 18:07:28.227: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 3.060582ms Apr 11 18:07:30.230: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.00683754s Apr 11 18:07:30.231: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:07:30.234 STEP: Trying to apply a random taint on the found node. 04/11/24 18:07:30.242 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:30.258 STEP: Trying to apply a random label on the found node. 04/11/24 18:07:30.262 STEP: verifying the node has the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 testing-label-value 04/11/24 18:07:30.276 STEP: Trying to relaunch the pod, still no tolerations. 04/11/24 18:07:30.279 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c1f093e4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/without-toleration to v126-worker2] 04/11/24 18:07:30.283 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c449062fa], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:30.283 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c458e367e], Reason = [Created], Message = [Created container without-toleration] 04/11/24 18:07:30.284 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c54733b99], Reason = [Started], Message = [Started container without-toleration] 04/11/24 18:07:30.284 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c54c0c99f4f362], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
04/11/24 18:07:30.294 STEP: Removing taint off the node 04/11/24 18:07:31.295 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c54c0c99f4f362], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/11/24 18:07:31.299 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c1f093e4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/without-toleration to v126-worker2] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c449062fa], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c458e367e], Reason = [Created], Message = [Created container without-toleration] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c54733b99], Reason = [Started], Message = [Started container without-toleration] 04/11/24 18:07:31.3 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:31.319 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cd793c131], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/still-no-tolerations to v126-worker2] 04/11/24 18:07:31.328 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cf9c9008e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:31.906 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cfac10711], Reason = [Created], Message = [Created container still-no-tolerations] 04/11/24 18:07:31.918 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0d074715ae], Reason = [Killing], Message = [Stopping container without-toleration] 04/11/24 18:07:32.129 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0d094ad1b1], Reason = [Started], Message = [Started container still-no-tolerations] 04/11/24 18:07:32.162 STEP: removing the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 off the node v126-worker2 04/11/24 18:07:32.327 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 04/11/24 18:07:32.342 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:32.347 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:32.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace 
"sched-pred-3266" for this suite. 04/11/24 18:07:32.355 ------------------------------ • [4.184 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:630 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:07:28.175 Apr 11 18:07:28.175: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:07:28.177 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:28.189 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:28.193 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:07:28.197: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:07:28.206: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:07:28.209: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:07:28.215: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.215: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:07:28.215: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.216: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:07:28.216: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:28.216: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching test/e2e/scheduling/predicates.go:630 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 04/11/24 18:07:28.216 Apr 11 18:07:28.224: INFO: Waiting up to 1m0s for pod "without-toleration" in namespace "sched-pred-3266" to be "running" Apr 11 18:07:28.227: INFO: Pod "without-toleration": Phase="Pending", Reason="", readiness=false. Elapsed: 3.060582ms Apr 11 18:07:30.230: INFO: Pod "without-toleration": Phase="Running", Reason="", readiness=true. Elapsed: 2.00683754s Apr 11 18:07:30.231: INFO: Pod "without-toleration" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/11/24 18:07:30.234 STEP: Trying to apply a random taint on the found node. 04/11/24 18:07:30.242 STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:30.258 STEP: Trying to apply a random label on the found node. 04/11/24 18:07:30.262 STEP: verifying the node has the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 testing-label-value 04/11/24 18:07:30.276 STEP: Trying to relaunch the pod, still no tolerations. 
04/11/24 18:07:30.279 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c1f093e4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/without-toleration to v126-worker2] 04/11/24 18:07:30.283 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c449062fa], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:30.283 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c458e367e], Reason = [Created], Message = [Created container without-toleration] 04/11/24 18:07:30.284 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c54733b99], Reason = [Started], Message = [Started container without-toleration] 04/11/24 18:07:30.284 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c54c0c99f4f362], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/11/24 18:07:30.294 STEP: Removing taint off the node 04/11/24 18:07:31.295 STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.17c54c0c99f4f362], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d: testing-taint-value}, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
04/11/24 18:07:31.299 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c1f093e4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/without-toleration to v126-worker2] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c449062fa], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c458e367e], Reason = [Created], Message = [Created container without-toleration] 04/11/24 18:07:31.3 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0c54733b99], Reason = [Started], Message = [Started container without-toleration] 04/11/24 18:07:31.3 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:31.319 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cd793c131], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3266/still-no-tolerations to v126-worker2] 04/11/24 18:07:31.328 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cf9c9008e], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/11/24 18:07:31.906 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0cfac10711], Reason = [Created], Message = [Created container still-no-tolerations] 04/11/24 18:07:31.918 STEP: Considering event: Type = [Normal], Name = [without-toleration.17c54c0d074715ae], Reason = [Killing], Message = [Stopping container without-toleration] 04/11/24 18:07:32.129 STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.17c54c0d094ad1b1], Reason = [Started], Message = [Started container still-no-tolerations] 04/11/24 18:07:32.162 STEP: removing the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 off the node v126-worker2 04/11/24 18:07:32.327 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-b241a245-e4d9-4329-824a-7b46f39c6819 04/11/24 18:07:32.342 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d=testing-taint-value:NoSchedule 04/11/24 18:07:32.347 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:32.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-3266" for this suite. 
04/11/24 18:07:32.355 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:07:32.374 Apr 11 18:07:32.374: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:07:32.375 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:32.385 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:32.389 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:07:32.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:07:32.400: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:07:32.403: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:07:32.409: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:07:32.409: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:07:32.409: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:07:32.409: INFO: still-no-tolerations from sched-pred-3266 started at 2024-04-11 18:07:31 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498 STEP: Trying to schedule Pod with nonempty NodeSelector. 04/11/24 18:07:32.409 STEP: Considering event: Type = [Warning], Name = [restricted-pod.17c54c0d1956eb1b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
04/11/24 18:07:32.431 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:33.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-3498" for this suite. 04/11/24 18:07:33.437 ------------------------------ • [1.068 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/11/24 18:07:32.374 Apr 11 18:07:32.374: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/11/24 18:07:32.375 STEP: Waiting for a default service account to be provisioned in namespace 04/11/24 18:07:32.385 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/11/24 18:07:32.389 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 11 18:07:32.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 11 18:07:32.400: INFO: Waiting for terminating namespaces to be deleted... Apr 11 18:07:32.403: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 11 18:07:32.409: INFO: create-loop-devs-tmv9n from kube-system started at 2024-02-15 12:43:26 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container loopdev ready: true, restart count 0 Apr 11 18:07:32.409: INFO: kindnet-l6j8p from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container kindnet-cni ready: true, restart count 0 Apr 11 18:07:32.409: INFO: kube-proxy-zhx9l from kube-system started at 2024-02-15 12:43:25 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container kube-proxy ready: true, restart count 0 Apr 11 18:07:32.409: INFO: still-no-tolerations from sched-pred-3266 started at 2024-04-11 18:07:31 +0000 UTC (1 container statuses recorded) Apr 11 18:07:32.409: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching test/e2e/scheduling/predicates.go:498 STEP: Trying to schedule Pod with nonempty NodeSelector. 04/11/24 18:07:32.409 STEP: Considering event: Type = [Warning], Name = [restricted-pod.17c54c0d1956eb1b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 
04/11/24 18:07:32.431 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/node/init/init.go:32 Apr 11 18:07:33.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:88 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "sched-pred-3498" for this suite. 04/11/24 18:07:33.437 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 11 18:07:33.448: INFO: Running AfterSuite actions on node 1 Apr 11 18:07:33.448: INFO: Skipping dumping logs from cluster ------------------------------ [SynchronizedAfterSuite] PASSED [0.000 seconds] [SynchronizedAfterSuite] test/e2e/e2e.go:88 Begin Captured GinkgoWriter Output >> [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:88 Apr 11 18:07:33.448: INFO: Running AfterSuite actions on node 1 Apr 11 18:07:33.448: INFO: Skipping dumping logs from cluster << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:153 [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:153 ------------------------------ [ReportAfterSuite] PASSED [0.000 seconds] [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:153 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:153 << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:529 [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:529 ------------------------------ [ReportAfterSuite] PASSED [0.235 seconds] [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:529 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:529 << End Captured GinkgoWriter Output ------------------------------ Summarizing 1 Failure: [FAIL] [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run [BeforeEach] verify pod overhead is accounted for test/e2e/scheduling/predicates.go:253 Ran 9 of 7069 Specs in 351.012 seconds FAIL! -- 8 Passed | 1 Failed | 0 Pending | 7060 Skipped --- FAIL: TestE2E (351.45s) FAIL Ginkgo ran 1 suite in 5m51.575543129s Test Suite Failed
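For reference on the last two specs: the taints-tolerations spec exercises only the negative case, a pod with no tolerations being rejected by the freshly tainted node ("0/3 nodes are available ... untolerated taint"). A toleration that would have matched that taint looks like the sketch below; the key and value are copied from this run's log and are regenerated randomly each time the spec runs, so they are illustrative only. Separately, the single failed pod-overhead spec in the summary can typically be re-run on its own by passing -ginkgo.focus with a regex matching its name to the e2e.test binary.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A toleration matching the NoSchedule taint applied in the taints-tolerations
	// spec above (key and value taken from this log; they differ on every run).
	tol := corev1.Toleration{
		Key:      "kubernetes.io/e2e-taint-key-7d73ee6d-6a85-4475-a737-90dc60579d3d",
		Operator: corev1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// Appended to pod.Spec.Tolerations, this would let the pod schedule onto the tainted node.
	fmt.Printf("%+v\n", tol)
}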