I0418 16:46:18.696328 17 e2e.go:126] Starting e2e run "589c8b1f-48ba-4a40-ae30-91bb4f146584" on Ginkgo node 1
Apr 18 16:46:18.712: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1713458778 - will randomize all specs

Will run 23 of 7069 specs
------------------------------
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
[SynchronizedBeforeSuite] TOP-LEVEL
test/e2e/e2e.go:77
Apr 18 16:46:18.874: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 16:46:18.876: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 18 16:46:18.904: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 18 16:46:18.936: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 18 16:46:18.937: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 18 16:46:18.937: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 18 16:46:18.942: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Apr 18 16:46:18.942: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 18 16:46:18.942: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 18 16:46:18.942: INFO: e2e test version: v1.26.13
Apr 18 16:46:18.944: INFO: kube-apiserver version: v1.26.6
[SynchronizedBeforeSuite] TOP-LEVEL
test/e2e/e2e.go:77
Apr 18 16:46:18.944: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Apr 18 16:46:18.949: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.075 seconds]
[SynchronizedBeforeSuite]
test/e2e/e2e.go:77
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
test/e2e/scheduling/preemption.go:814
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:46:18.997
Apr 18 16:46:18.997: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/18/24 16:46:18.999
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:46:19.009
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:46:19.012
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 18 16:46:19.027: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 16:47:19.053: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:47:19.056
Apr 18 16:47:19.056: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path 04/18/24 16:47:19.058
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:47:19.071
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:47:19.075
[BeforeEach] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:31
[BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:771
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:814
Apr 18 16:47:19.095: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
Apr 18 16:47:19.098: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints test/e2e/framework/node/init/init.go:32
Apr 18 16:47:19.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:787
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 16:47:19.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] PriorityClass endpoints test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] PriorityClass endpoints dump namespaces | framework.go:196
[DeferCleanup (Each)] PriorityClass endpoints tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-path-4876" for this suite. 04/18/24 16:47:19.165
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-6264" for this suite. 04/18/24 16:47:19.171
------------------------------
• [SLOW TEST] [60.178 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  PriorityClass endpoints test/e2e/scheduling/preemption.go:764
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:814
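------------------------------

The two INFO lines above are expected: a PriorityClass's value is immutable, so the spec's PUT attempts on "p1" and "p2" are rejected by the API server and the test then exercises the other HTTP methods (GET, LIST, PATCH of metadata, DELETE). A minimal client-go sketch, not the e2e framework's own code, that reproduces the rejection; the kubeconfig path, name, and numbers are illustrative:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a PriorityClass, then try to change its value in place.
	pc, err := cs.SchedulingV1().PriorityClasses().Create(ctx,
		&schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: "demo-priority"}, // illustrative name
			Value:      1000,
		}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	defer cs.SchedulingV1().PriorityClasses().Delete(ctx, pc.Name, metav1.DeleteOptions{})

	pc.Value = 2000 // value is immutable; the apiserver rejects this update
	_, err = cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
	fmt.Println(err) // expected: ... value: Forbidden: may not be changed in an update
}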
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
test/e2e/apps/daemon_set.go:834
[BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:47:19.179
Apr 18 16:47:19.179: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/18/24 16:47:19.181
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:47:19.192
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:47:19.196
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157
[It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834
STEP: Creating simple DaemonSet "daemon-set" 04/18/24 16:47:19.219
STEP: Check that daemon pods launch on every node of the cluster. 04/18/24 16:47:19.226
Apr 18 16:47:19.230: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:47:19.232: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:47:19.233: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:47:20.237: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:47:20.241: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:47:20.241: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:47:21.238: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:47:21.241: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 18 16:47:21.241: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: listing all DeamonSets 04/18/24 16:47:21.244
STEP: DeleteCollection of the DaemonSets 04/18/24 16:47:21.248
STEP: Verify that ReplicaSets have been deleted 04/18/24 16:47:21.253
[AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122
Apr 18 16:47:21.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"47868"},"items":null}
Apr 18 16:47:21.270: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"47868"},"items":[{"metadata":{"name":"daemon-set-hncdx","generateName":"daemon-set-","namespace":"daemonsets-7693","uid":"87fbd60c-b394-4565-ba98-5cd627da2839","resourceVersion":"47864","creationTimestamp":"2024-04-18T16:47:19Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"2b89a3dd-3d75-41e8-9347-6836f34342a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-18T16:47:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b89a3dd-3d75-41e8-9347-6836f34342a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-18T16:47:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:
type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-9dx2n","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-9dx2n","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"}],"hostIP":"172.22.0.3","podIP":"10.244.2.92","podIPs":[{"ip":"10.244.2.92"}],"startTime":"2024-04-18T16:47:19Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-18T16:47:20Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://64de7ede67bdedf7a55641141880908dbcdc1c432bfa40fa7f0c26aedb1ad5bd","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-xbx2r","generateName":"daemon-set-","namespace":"daemonsets-7693","uid":"02340003-c724-498a-ac8d-6ff428800e20","resourceVersion":"47866","creationTimestamp":"2024-04-18T16:47:19Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"2b89a3dd-3d75-41e8-9347-6836f34342
a5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-04-18T16:47:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2b89a3dd-3d75-41e8-9347-6836f34342a5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-04-18T16:47:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.112\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tskq7","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-tskq7","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v126-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v126-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","s
tatus":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"}],"hostIP":"172.22.0.4","podIP":"10.244.1.112","podIPs":[{"ip":"10.244.1.112"}],"startTime":"2024-04-18T16:47:19Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-18T16:47:20Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://78c855ed1623df84734e56a8ff4108def5f7e50385d965474beed66eb028b523","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:47:21.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-7693" for this suite. 04/18/24 16:47:21.283 ------------------------------ • [2.109 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:47:19.179 Apr 18 16:47:19.179: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/18/24 16:47:19.181 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:47:19.192 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:47:19.196 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:834 STEP: Creating simple DaemonSet "daemon-set" 04/18/24 16:47:19.219 STEP: Check that daemon pods launch on every node of the cluster. 
tatus":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:20Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2024-04-18T16:47:19Z"}],"hostIP":"172.22.0.4","podIP":"10.244.1.112","podIPs":[{"ip":"10.244.1.112"}],"startTime":"2024-04-18T16:47:19Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2024-04-18T16:47:20Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://78c855ed1623df84734e56a8ff4108def5f7e50385d965474beed66eb028b523","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:47:21.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-7693" for this suite. 04/18/24 16:47:21.283 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:224 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:47:21.3 Apr 18 16:47:21.300: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 04/18/24 16:47:21.302 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:47:21.313 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:47:21.316 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97 Apr 18 16:47:21.331: INFO: Waiting up to 1m0s for all nodes to be ready Apr 18 16:48:21.355: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:224 STEP: Create pods that use 4/5 of node resources. 04/18/24 16:48:21.358 Apr 18 16:48:21.379: INFO: Created pod: pod0-0-sched-preemption-low-priority Apr 18 16:48:21.383: INFO: Created pod: pod0-1-sched-preemption-medium-priority Apr 18 16:48:21.396: INFO: Created pod: pod1-0-sched-preemption-medium-priority Apr 18 16:48:21.400: INFO: Created pod: pod1-1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
Apr 18 16:48:21.400: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-7715" to be "running"
Apr 18 16:48:21.403: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.881629ms
Apr 18 16:48:23.408: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.007598566s
Apr 18 16:48:23.408: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Apr 18 16:48:23.408: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-7715" to be "running"
Apr 18 16:48:23.411: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.965834ms
Apr 18 16:48:23.411: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
Apr 18 16:48:23.411: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-7715" to be "running"
Apr 18 16:48:23.414: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 3.001193ms
Apr 18 16:48:23.414: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running"
Apr 18 16:48:23.414: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-7715" to be "running"
Apr 18 16:48:23.417: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.903852ms
Apr 18 16:48:23.417: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a critical pod that use same resources as that of a lower priority pod 04/18/24 16:48:23.417
Apr 18 16:48:23.427: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running"
Apr 18 16:48:23.430: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.718324ms
Apr 18 16:48:25.435: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007075451s
Apr 18 16:48:27.434: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00665697s
Apr 18 16:48:29.434: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.006856754s
Apr 18 16:48:29.434: INFO: Pod "critical-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 16:48:29.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-7715" for this suite. 04/18/24 16:48:29.502
------------------------------
• [SLOW TEST] [68.207 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:224
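------------------------------

What lets "critical-pod" evict the low-priority pod is its priority class: the built-in system-cluster-critical and system-node-critical classes carry values far above any user-defined class, and the test creates the pod in kube-system, where system priority classes are permitted. A hedged sketch of such a pod; the image and the resource request are illustrative stand-ins for the test's "same resources as the lower priority pod" sizing:

package main

import (
	"context"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: metav1.NamespaceSystem},
		Spec: corev1.PodSpec{
			// A system priority class far above the test's low/medium classes;
			// the scheduler preempts lower-priority pods to make room.
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "critical-pod",
				Image: "registry.k8s.io/pause:3.9", // illustrative image
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Illustrative request; sized so the node cannot fit it
						// without evicting the low-priority pod.
						corev1.ResourceMemory: resource.MustParse("100Mi"),
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(metav1.NamespaceSystem).Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}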
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
test/e2e/scheduling/preemption.go:624
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:48:29.517
Apr 18 16:48:29.518: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/18/24 16:48:29.519
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:48:29.53
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:48:29.534
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:97
Apr 18 16:48:29.550: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 16:49:29.575: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:49:29.579
Apr 18 16:49:29.579: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path 04/18/24 16:49:29.581
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:29.592
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:49:29.596
[BeforeEach] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:31
[BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:576
STEP: Finding an available node 04/18/24 16:49:29.6
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 16:49:29.601
Apr 18 16:49:29.608: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-8395" to be "running"
Apr 18 16:49:29.611: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.909207ms
Apr 18 16:49:31.615: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007360861s
Apr 18 16:49:31.615: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 16:49:31.618
Apr 18 16:49:31.628: INFO: found a healthy node: v126-worker2
[It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:624
Apr 18 16:49:37.695: INFO: pods created so far: [1 1 1]
Apr 18 16:49:37.695: INFO: length of pods created so far: 3
Apr 18 16:49:39.703: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath test/e2e/framework/node/init/init.go:32
Apr 18 16:49:46.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:549
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 16:49:46.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] PreemptionExecutionPath test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] PreemptionExecutionPath dump namespaces | framework.go:196
[DeferCleanup (Each)] PreemptionExecutionPath tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-path-8395" for this suite. 04/18/24 16:49:46.772
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-2140" for this suite. 04/18/24 16:49:46.777
------------------------------
• [SLOW TEST] [77.264 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath test/e2e/scheduling/preemption.go:537
    runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:624
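------------------------------

The running path drives ReplicaSets at different priorities and then a higher-priority pod ("pod4" above) that preempts one of their replicas; the "[2 2 1]" counts show one ReplicaSet losing a pod. The workload-side setup is just a ReplicaSet whose pod template names a PriorityClass. A sketch with illustrative names (the class name "p3" and sizes are assumptions, not taken from the test):

package main

import (
	"context"
	"path/filepath"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	labels := map[string]string{"name": "rs-pod3"} // illustrative
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-pod3", Namespace: "default"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Replicas inherit this priority; a later pod referencing a
					// higher-valued PriorityClass can preempt them on a full node.
					PriorityClassName: "p3",
					Containers: []corev1.Container{{
						Name:  "pod3",
						Image: "registry.k8s.io/pause:3.9", // illustrative
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("default").Create(context.Background(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}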
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
test/e2e/apimachinery/namespace.go:251
[BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:49:46.784
Apr 18 16:49:46.785: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/18/24 16:49:46.786
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:46.796
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:49:46.799
[BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31
[It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:251
STEP: Creating a test namespace 04/18/24 16:49:46.803
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:46.812
STEP: Creating a service in the namespace 04/18/24 16:49:46.815
STEP: Deleting the namespace 04/18/24 16:49:46.823
STEP: Waiting for the namespace to be removed. 04/18/24 16:49:46.827
STEP: Recreating the namespace 04/18/24 16:49:52.83
STEP: Verifying there is no service in the namespace 04/18/24 16:49:52.842
[AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32
Apr 18 16:49:52.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-1979" for this suite. 04/18/24 16:49:52.85
STEP: Destroying namespace "nsdeletetest-5165" for this suite. 04/18/24 16:49:52.855
Apr 18 16:49:52.858: INFO: Namespace nsdeletetest-5165 was already deleted
STEP: Destroying namespace "nsdeletetest-3807" for this suite. 04/18/24 16:49:52.858
------------------------------
• [SLOW TEST] [6.079 seconds]
[sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:251
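------------------------------

The guarantee being checked: deleting a namespace puts it into Terminating, the namespace lifecycle controller removes everything inside it (including Services), and only then does the namespace object disappear, so a recreated namespace starts empty. A compact client-go sketch of the same create/delete/wait flow; the names, ports, and poll interval are illustrative:

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Create a throwaway namespace and a Service inside it.
	ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().Services(ns.Name).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Delete the namespace and wait until it is fully gone (finalizers run first).
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	for {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns.Name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("namespace removed; its services are gone with it")
}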
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:466
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:49:52.865
Apr 18 16:49:52.866: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 16:49:52.867
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:52.878
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:49:52.882
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:97
Apr 18 16:49:52.886: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 16:49:52.895: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 16:49:52.899: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 16:49:52.905: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.906: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:49:52.906: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.906: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:49:52.906: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.906: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:49:52.906: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 16:49:52.912: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.912: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:49:52.912: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.912: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:49:52.912: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.912: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:49:52.912: INFO: pod4 from sched-preemption-path-8395 started at 2024-04-18 16:49:39 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.912: INFO: Container pod4 ready: true, restart count 0
Apr 18 16:49:52.912: INFO: rs-pod3-m7jss from sched-preemption-path-8395 started at 2024-04-18 16:49:35 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:52.912: INFO: Container pod3 ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:466
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 16:49:52.912
Apr 18 16:49:52.921: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-7304" to be "running"
Apr 18 16:49:52.924: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.141727ms
Apr 18 16:49:54.929: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007519302s
Apr 18 16:49:56.930: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 4.008241878s
Apr 18 16:49:56.930: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 16:49:56.933
STEP: Trying to apply a random label on the found node. 04/18/24 16:49:56.942
STEP: verifying the node has the label kubernetes.io/e2e-95170410-bfff-4865-aefe-e9722d19a02e 42 04/18/24 16:49:56.954
STEP: Trying to relaunch the pod, now with labels. 04/18/24 16:49:56.957
Apr 18 16:49:56.962: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-7304" to be "not pending"
Apr 18 16:49:56.965: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.963492ms
Apr 18 16:49:58.969: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.007000057s
Apr 18 16:49:58.969: INFO: Pod "with-labels" satisfied condition "not pending"
STEP: removing the label kubernetes.io/e2e-95170410-bfff-4865-aefe-e9722d19a02e off the node v126-worker2 04/18/24 16:49:58.972
STEP: verifying the node doesn't have the label kubernetes.io/e2e-95170410-bfff-4865-aefe-e9722d19a02e 04/18/24 16:49:58.985
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:49:58.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-7304" for this suite. 04/18/24 16:49:58.993
------------------------------
• [SLOW TEST] [6.132 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  test/e2e/scheduling/predicates.go:466
------------------------------
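The two moving parts in this spec are a node label and a pod whose nodeSelector requires it. A hedged client-go sketch of the same sequence; the label key/value, pod name, and image loosely mirror the log and are otherwise assumptions:

package sketches

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// runOnLabeledNode labels a node, then creates a pod that can only land there.
func runOnLabeledNode(ctx context.Context, cs kubernetes.Interface, node, ns string) error {
    patch := []byte(`{"metadata":{"labels":{"example.io/e2e":"42"}}}`) // illustrative key/value
    if _, err := cs.CoreV1().Nodes().Patch(ctx, node, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"example.io/e2e": "42"}, // must match the node label above
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "registry.k8s.io/pause:3.9",
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}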
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:268
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:49:59.008
Apr 18 16:49:59.008: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename namespaces 04/18/24 16:49:59.01
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:59.02
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:49:59.023
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:31
[It] should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:268
STEP: creating a Namespace 04/18/24 16:49:59.027
STEP: patching the Namespace 04/18/24 16:49:59.036
STEP: get the Namespace and ensuring it has the label 04/18/24 16:49:59.039
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:49:59.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "namespaces-4216" for this suite. 04/18/24 16:49:59.045
STEP: Destroying namespace "nspatchtest-ce5f0267-a044-41b5-926d-2ff51db897bb-4313" for this suite. 04/18/24 16:49:59.05
------------------------------
• [0.046 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should patch a Namespace [Conformance]
  test/e2e/apimachinery/namespace.go:268
------------------------------
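The patch/get/verify cycle above reduces to two API calls. A sketch under assumed names (the label key and value are illustrative):

package sketches

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// patchNamespaceLabel patches a label onto a namespace and reads it back.
func patchNamespaceLabel(ctx context.Context, cs kubernetes.Interface, ns string) error {
    patch := []byte(`{"metadata":{"labels":{"e2e-example":"patched"}}}`)
    if _, err := cs.CoreV1().Namespaces().Patch(ctx, ns, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
        return err
    }
    got, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
    if err != nil {
        return err
    }
    if got.Labels["e2e-example"] != "patched" {
        return fmt.Errorf("label not applied to %q", ns)
    }
    return nil
}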
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/scheduling/predicates.go:331
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:49:59.056
Apr 18 16:49:59.056: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 16:49:59.057
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:49:59.065
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:49:59.069
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:97
Apr 18 16:49:59.072: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 16:49:59.078: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 16:49:59.081: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 16:49:59.086: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.086: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:49:59.086: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.086: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:49:59.086: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.086: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:49:59.086: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 16:49:59.090: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.090: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:49:59.090: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.090: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:49:59.090: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.090: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:49:59.090: INFO: with-labels from sched-pred-7304 started at 2024-04-18 16:49:56 +0000 UTC (1 container statuses recorded)
Apr 18 16:49:59.090: INFO: Container with-labels ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/scheduling/predicates.go:331
STEP: verifying the node has the label node v126-worker 04/18/24 16:49:59.103
STEP: verifying the node has the label node v126-worker2 04/18/24 16:49:59.116
Apr 18 16:49:59.123: INFO: Pod create-loop-devs-w9ldx requesting resource cpu=0m on Node v126-worker
Apr 18 16:49:59.123: INFO: Pod create-loop-devs-xnxkn requesting resource cpu=0m on Node v126-worker2
Apr 18 16:49:59.123: INFO: Pod kindnet-68nxx requesting resource cpu=100m on Node v126-worker
Apr 18 16:49:59.123: INFO: Pod kindnet-wqc6h requesting resource cpu=100m on Node v126-worker2
Apr 18 16:49:59.123: INFO: Pod kube-proxy-4wtz6 requesting resource cpu=0m on Node v126-worker
Apr 18 16:49:59.123: INFO: Pod kube-proxy-hjqqd requesting resource cpu=0m on Node v126-worker2
Apr 18 16:49:59.123: INFO: Pod with-labels requesting resource cpu=0m on Node v126-worker2
STEP: Starting Pods to consume most of the cluster CPU. 04/18/24 16:49:59.123
Apr 18 16:49:59.124: INFO: Creating a pod which consumes cpu=61530m on Node v126-worker
Apr 18 16:49:59.129: INFO: Creating a pod which consumes cpu=61530m on Node v126-worker2
Apr 18 16:49:59.133: INFO: Waiting up to 5m0s for pod "filler-pod-a09523e1-591a-473f-a680-fe3847f02c70" in namespace "sched-pred-4396" to be "running"
Apr 18 16:49:59.136: INFO: Pod "filler-pod-a09523e1-591a-473f-a680-fe3847f02c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358623ms
Apr 18 16:50:01.140: INFO: Pod "filler-pod-a09523e1-591a-473f-a680-fe3847f02c70": Phase="Running", Reason="", readiness=true. Elapsed: 2.007026268s
Apr 18 16:50:01.140: INFO: Pod "filler-pod-a09523e1-591a-473f-a680-fe3847f02c70" satisfied condition "running"
Apr 18 16:50:01.140: INFO: Waiting up to 5m0s for pod "filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662" in namespace "sched-pred-4396" to be "running"
Apr 18 16:50:01.143: INFO: Pod "filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662": Phase="Running", Reason="", readiness=true. Elapsed: 2.770865ms
Apr 18 16:50:01.143: INFO: Pod "filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662" satisfied condition "running"
STEP: Creating another pod that requires unavailable amount of CPU. 04/18/24 16:50:01.143
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662.17c76de1a4d08860], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4396/filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662 to v126-worker2] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662.17c76de1cbf32cff], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662.17c76de1ccba14f2], Reason = [Created], Message = [Created container filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662.17c76de1dcec6b77], Reason = [Started], Message = [Started container filler-pod-5b0481de-dde5-46dd-a211-9e12c701a662] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-a09523e1-591a-473f-a680-fe3847f02c70.17c76de1a48b21e9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4396/filler-pod-a09523e1-591a-473f-a680-fe3847f02c70 to v126-worker] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-a09523e1-591a-473f-a680-fe3847f02c70.17c76de1ccf9fddc], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 04/18/24 16:50:01.147
STEP: Considering event: Type = [Normal], Name = [filler-pod-a09523e1-591a-473f-a680-fe3847f02c70.17c76de1cd9e8186], Reason = [Created], Message = [Created container filler-pod-a09523e1-591a-473f-a680-fe3847f02c70] 04/18/24 16:50:01.148
STEP: Considering event: Type = [Normal], Name = [filler-pod-a09523e1-591a-473f-a680-fe3847f02c70.17c76de1dcf1a6e8], Reason = [Started], Message = [Started container filler-pod-a09523e1-591a-473f-a680-fe3847f02c70] 04/18/24 16:50:01.148
STEP: Considering event: Type = [Warning], Name = [additional-pod.17c76de21d004b57], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..] 04/18/24 16:50:01.158
STEP: removing the label node off the node v126-worker 04/18/24 16:50:02.159
STEP: verifying the node doesn't have the label node 04/18/24 16:50:02.17
STEP: removing the label node off the node v126-worker2 04/18/24 16:50:02.173
STEP: verifying the node doesn't have the label node 04/18/24 16:50:02.185
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:50:02.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-4396" for this suite. 04/18/24 16:50:02.192
------------------------------
• [3.139 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/scheduling/predicates.go:331
------------------------------
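Each filler pod simply carries a CPU request sized to a node's remaining allocatable capacity (cpu=61530m in this run; the value is cluster-specific), so the next pod cannot fit anywhere and fails with Insufficient cpu. A simplified sketch, pinning with the standard kubernetes.io/hostname label rather than the test's own affinity machinery (that substitution is an assumption made for brevity):

package sketches

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createFiller creates a pod that requests a fixed slice of one node's CPU,
// e.g. cpu = "61530m".
func createFiller(ctx context.Context, cs kubernetes.Interface, ns, node, cpu string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "filler-pod-"},
        Spec: corev1.PodSpec{
            // Steer to one node so its remaining allocatable CPU is known.
            NodeSelector: map[string]string{"kubernetes.io/hostname": node},
            Containers: []corev1.Container{{
                Name:  "filler",
                Image: "registry.k8s.io/pause:3.9",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                    Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                },
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}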
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
  test/e2e/apps/daemon_set.go:443
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:50:02.199
Apr 18 16:50:02.199: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/18/24 16:50:02.2
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:02.209
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:02.212
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:157
[It] should rollback without unnecessary restarts [Conformance]
  test/e2e/apps/daemon_set.go:443
Apr 18 16:50:02.231: INFO: Create a RollingUpdate DaemonSet
Apr 18 16:50:02.235: INFO: Check that daemon pods launch on every node of the cluster
Apr 18 16:50:02.239: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:02.241: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:50:02.241: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:50:03.245: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:03.247: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:50:03.247: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:50:04.244: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:04.247: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 18 16:50:04.247: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
Apr 18 16:50:04.247: INFO: Update the DaemonSet to trigger a rollout
Apr 18 16:50:04.254: INFO: Updating DaemonSet daemon-set
Apr 18 16:50:07.266: INFO: Roll back the DaemonSet before rollout is complete
Apr 18 16:50:07.275: INFO: Updating DaemonSet daemon-set
Apr 18 16:50:07.275: INFO: Make sure DaemonSet rollback is complete
Apr 18 16:50:07.278: INFO: Wrong image for pod: daemon-set-m5dct. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent.
Apr 18 16:50:07.278: INFO: Pod daemon-set-m5dct is not available
Apr 18 16:50:07.281: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:08.289: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:09.289: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:10.285: INFO: Pod daemon-set-dfk9k is not available
Apr 18 16:50:10.289: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:122
STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:50:10.297
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7693, will wait for the garbage collector to delete the pods 04/18/24 16:50:10.297
Apr 18 16:50:10.356: INFO: Deleting DaemonSet.extensions daemon-set took: 5.136472ms
Apr 18 16:50:10.456: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.135782ms
Apr 18 16:50:12.060: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:50:12.060: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Apr 18 16:50:12.063: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"48662"},"items":null}
Apr 18 16:50:12.069: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"48662"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:50:12.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-7693" for this suite. 04/18/24 16:50:12.085
------------------------------
• [SLOW TEST] [9.893 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  test/e2e/apps/daemon_set.go:443
------------------------------
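client-go has no single "rollout undo" call for DaemonSets; rolling back amounts to restoring the previous pod template, which is what this spec triggers mid-rollout. A sketch under that reading (the DaemonSet name and images follow the log; everything else is illustrative):

package sketches

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// rollBack restores a DaemonSet's first container to a known-good image,
// e.g. back from "foo:non-existent" to the httpd image in the log.
func rollBack(ctx context.Context, cs kubernetes.Interface, ns, name, goodImage string) error {
    ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    ds.Spec.Template.Spec.Containers[0].Image = goodImage
    _, err = cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
    return err
}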
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/scheduling/predicates.go:443
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:50:12.113
Apr 18 16:50:12.113: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 16:50:12.116
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:12.13
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:12.134
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:97
Apr 18 16:50:12.139: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 16:50:12.146: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 16:50:12.149: INFO: Logging pods the apiserver thinks is on node v126-worker before test
Apr 18 16:50:12.155: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.155: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:50:12.155: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.155: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:50:12.155: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.155: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:50:12.155: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test
Apr 18 16:50:12.160: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.160: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:50:12.160: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.160: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:50:12.160: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded)
Apr 18 16:50:12.160: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/scheduling/predicates.go:443
STEP: Trying to schedule Pod with nonempty NodeSelector. 04/18/24 16:50:12.16
STEP: Considering event: Type = [Warning], Name = [restricted-pod.17c76de4ae3c3e27], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..] 04/18/24 16:50:12.184
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:50:13.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-3837" for this suite. 04/18/24 16:50:13.19
------------------------------
• [1.082 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if not matching [Conformance]
  test/e2e/scheduling/predicates.go:443
------------------------------
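Specs like this one observe the scheduler's verdict through events rather than pod status: the pod stays Pending and the FailedScheduling event carries the "0/3 nodes are available" explanation. A sketch of reading those events back (the pod name is illustrative; the field-selector keys are the standard event fields):

package sketches

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printFailedScheduling lists the scheduler's FailedScheduling events for a pod.
func printFailedScheduling(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
    evs, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
        FieldSelector: "involvedObject.name=" + pod + ",reason=FailedScheduling",
    })
    if err != nil {
        return err
    }
    for _, e := range evs.Items {
        fmt.Printf("%s: %s\n", e.Reason, e.Message)
    }
    return nil
}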
[sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]
  test/e2e/apps/controller_revision.go:124
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:50:13.224
Apr 18 16:50:13.224: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename controllerrevisions 04/18/24 16:50:13.225
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:13.236
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:13.24
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/apps/controller_revision.go:93
[It] should manage the lifecycle of a ControllerRevision [Conformance]
  test/e2e/apps/controller_revision.go:124
STEP: Creating DaemonSet "e2e-nngfl-daemon-set" 04/18/24 16:50:13.26
STEP: Check that daemon pods launch on every node of the cluster. 04/18/24 16:50:13.266
Apr 18 16:50:13.270: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:13.273: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 0
Apr 18 16:50:13.273: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:50:14.278: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:14.282: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 0
Apr 18 16:50:14.282: INFO: Node v126-worker is running 0 daemon pod, expected 1
Apr 18 16:50:15.278: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:50:15.281: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 2
Apr 18 16:50:15.281: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-nngfl-daemon-set
STEP: Confirm DaemonSet "e2e-nngfl-daemon-set" successfully created with "daemonset-name=e2e-nngfl-daemon-set" label 04/18/24 16:50:15.284
STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-nngfl-daemon-set" 04/18/24 16:50:15.29
Apr 18 16:50:15.294: INFO: Located ControllerRevision: "e2e-nngfl-daemon-set-7b54959b4d"
STEP: Patching ControllerRevision "e2e-nngfl-daemon-set-7b54959b4d" 04/18/24 16:50:15.297
Apr 18 16:50:15.305: INFO: e2e-nngfl-daemon-set-7b54959b4d has been patched
STEP: Create a new ControllerRevision 04/18/24 16:50:15.305
Apr 18 16:50:15.309: INFO: Created ControllerRevision: e2e-nngfl-daemon-set-685d8c46db
STEP: Confirm that there are two ControllerRevisions 04/18/24 16:50:15.309
Apr 18 16:50:15.309: INFO: Requesting list of ControllerRevisions to confirm quantity
Apr 18 16:50:15.312: INFO: Found 2 ControllerRevisions
STEP: Deleting ControllerRevision "e2e-nngfl-daemon-set-7b54959b4d" 04/18/24 16:50:15.312
STEP: Confirm that there is only one ControllerRevision 04/18/24 16:50:15.317
Apr 18 16:50:15.317: INFO: Requesting list of ControllerRevisions to confirm quantity
Apr 18 16:50:15.320: INFO: Found 1 ControllerRevisions
STEP: Updating ControllerRevision "e2e-nngfl-daemon-set-685d8c46db" 04/18/24 16:50:15.322
Apr 18 16:50:15.330: INFO: e2e-nngfl-daemon-set-685d8c46db has been updated
STEP: Generate another ControllerRevision by patching the Daemonset 04/18/24 16:50:15.33
W0418 16:50:15.349235 17 warnings.go:70] unknown field "updateStrategy"
STEP: Confirm that there are two ControllerRevisions 04/18/24 16:50:15.349
Apr 18 16:50:15.349: INFO: Requesting list of ControllerRevisions to confirm quantity
Apr 18 16:50:16.352: INFO: Requesting list of ControllerRevisions to confirm quantity
Apr 18 16:50:16.356: INFO: Found 2 ControllerRevisions
STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-nngfl-daemon-set-685d8c46db=updated" 04/18/24 16:50:16.356
STEP: Confirm that there is only one ControllerRevision 04/18/24 16:50:16.363
Apr 18 16:50:16.363: INFO: Requesting list of ControllerRevisions to confirm quantity
Apr 18 16:50:16.366: INFO: Found 1 ControllerRevisions
Apr 18 16:50:16.368: INFO: ControllerRevision "e2e-nngfl-daemon-set-8694b889f5" has revision 3
[AfterEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/apps/controller_revision.go:58
STEP: Deleting DaemonSet "e2e-nngfl-daemon-set" 04/18/24 16:50:16.371
STEP: deleting DaemonSet.extensions e2e-nngfl-daemon-set in namespace controllerrevisions-6279, will wait for the garbage collector to delete the pods 04/18/24 16:50:16.371
Apr 18 16:50:16.430: INFO: Deleting DaemonSet.extensions e2e-nngfl-daemon-set took: 4.748459ms
Apr 18 16:50:16.530: INFO: Terminating DaemonSet.extensions e2e-nngfl-daemon-set pods took: 100.544619ms
Apr 18 16:50:18.134: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 0
Apr 18 16:50:18.134: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-nngfl-daemon-set
Apr 18 16:50:18.137: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"48761"},"items":null}
Apr 18 16:50:18.140: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"48761"},"items":null}
[AfterEach] [sig-apps] ControllerRevision [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:50:18.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "controllerrevisions-6279" for this suite. 04/18/24 16:50:18.155
------------------------------
• [4.936 seconds]
[sig-apps] ControllerRevision [Serial]
test/e2e/apps/framework.go:23
  should manage the lifecycle of a ControllerRevision [Conformance]
  test/e2e/apps/controller_revision.go:124
------------------------------
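The lifecycle exercised above is mostly List and DeleteCollection against the apps/v1 ControllerRevision resource. A sketch with illustrative label selectors (the DaemonSet label mirrors the log's naming scheme; the deletion selector is an assumption):

package sketches

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// pruneRevisions lists a DaemonSet's ControllerRevisions by label, then
// removes the ones matching a second selector in a single DeleteCollection call.
func pruneRevisions(ctx context.Context, cs kubernetes.Interface, ns string) error {
    list, err := cs.AppsV1().ControllerRevisions(ns).List(ctx,
        metav1.ListOptions{LabelSelector: "daemonset-name=e2e-example-daemon-set"})
    if err != nil {
        return err
    }
    fmt.Println("found", len(list.Items), "ControllerRevisions")
    return cs.AppsV1().ControllerRevisions(ns).DeleteCollection(ctx,
        metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "revision-state=updated"})
}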
04/18/24 16:50:13.266 Apr 18 16:50:13.270: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:13.273: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 0 Apr 18 16:50:13.273: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:14.278: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:14.282: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 0 Apr 18 16:50:14.282: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:15.278: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:15.281: INFO: Number of nodes with available pods controlled by daemonset e2e-nngfl-daemon-set: 2 Apr 18 16:50:15.281: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-nngfl-daemon-set STEP: Confirm DaemonSet "e2e-nngfl-daemon-set" successfully created with "daemonset-name=e2e-nngfl-daemon-set" label 04/18/24 16:50:15.284 STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-nngfl-daemon-set" 04/18/24 16:50:15.29 Apr 18 16:50:15.294: INFO: Located ControllerRevision: "e2e-nngfl-daemon-set-7b54959b4d" STEP: Patching ControllerRevision "e2e-nngfl-daemon-set-7b54959b4d" 04/18/24 16:50:15.297 Apr 18 16:50:15.305: INFO: e2e-nngfl-daemon-set-7b54959b4d has been patched STEP: Create a new ControllerRevision 04/18/24 16:50:15.305 Apr 18 16:50:15.309: INFO: Created ControllerRevision: e2e-nngfl-daemon-set-685d8c46db STEP: Confirm that there are two ControllerRevisions 04/18/24 16:50:15.309 Apr 18 16:50:15.309: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 18 16:50:15.312: INFO: Found 2 ControllerRevisions STEP: Deleting ControllerRevision "e2e-nngfl-daemon-set-7b54959b4d" 04/18/24 16:50:15.312 STEP: Confirm that there is only one ControllerRevision 04/18/24 16:50:15.317 Apr 18 16:50:15.317: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 18 16:50:15.320: INFO: Found 1 ControllerRevisions STEP: Updating ControllerRevision "e2e-nngfl-daemon-set-685d8c46db" 04/18/24 16:50:15.322 Apr 18 16:50:15.330: INFO: e2e-nngfl-daemon-set-685d8c46db has been updated STEP: Generate another ControllerRevision by patching the Daemonset 04/18/24 16:50:15.33 W0418 16:50:15.349235 17 warnings.go:70] unknown field "updateStrategy" STEP: Confirm that there are two ControllerRevisions 04/18/24 16:50:15.349 Apr 18 16:50:15.349: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 18 16:50:16.352: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 18 16:50:16.356: INFO: Found 2 ControllerRevisions STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-nngfl-daemon-set-685d8c46db=updated" 04/18/24 16:50:16.356 STEP: Confirm that there is only one ControllerRevision 04/18/24 16:50:16.363 Apr 18 16:50:16.363: INFO: Requesting list of ControllerRevisions to confirm quantity Apr 18 16:50:16.366: INFO: Found 1 ControllerRevisions Apr 18 16:50:16.368: INFO: ControllerRevision "e2e-nngfl-daemon-set-8694b889f5" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] 
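The lifecycle steps above (list by label, patch, create, delete, and DeleteCollection) are plain apps/v1 API calls. A minimal client-go sketch of the same operations, assuming cluster access via ~/.kube/config; the namespace, selector, and label values below are illustrative, not taken from the test:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	// Build a client from the local kubeconfig, as the suite does with --kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, selector := "default", "daemonset-name=my-daemon-set" // illustrative values

	// List the ControllerRevisions that back the DaemonSet's rollout history.
	revs, err := cs.AppsV1().ControllerRevisions(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	for _, r := range revs.Items {
		fmt.Printf("revision %d: %s\n", r.Revision, r.Name)
	}

	if len(revs.Items) > 0 {
		name := revs.Items[0].Name
		// Patch a label onto one revision, mirroring the "Patching ControllerRevision" step.
		_, err = cs.AppsV1().ControllerRevisions(ns).Patch(ctx, name,
			types.StrategicMergePatchType,
			[]byte(`{"metadata":{"labels":{"e2e":"patched"}}}`), metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
		// Remove revisions in bulk by label, as the DeleteCollection step does.
		err = cs.AppsV1().ControllerRevisions(ns).DeleteCollection(ctx,
			metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "e2e=patched"})
		if err != nil {
			panic(err)
		}
	}
}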
------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:243 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:18.177 Apr 18 16:50:18.177: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/18/24 16:50:18.179 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:18.189 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:18.193 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:243 STEP: Creating a test namespace 04/18/24 16:50:18.196 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:18.209 STEP: Creating a pod in the namespace 04/18/24 16:50:18.212 STEP: Waiting for the pod to have running status 04/18/24 16:50:18.22 Apr 18 16:50:18.220: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-532" to be "running" Apr 18 16:50:18.223: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.768272ms Apr 18 16:50:20.227: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true.
Elapsed: 2.006454289s Apr 18 16:50:20.227: INFO: Pod "test-pod" satisfied condition "running" STEP: Deleting the namespace 04/18/24 16:50:20.227 STEP: Waiting for the namespace to be removed. 04/18/24 16:50:20.232 STEP: Recreating the namespace 04/18/24 16:50:31.236 STEP: Verifying there are no pods in the namespace 04/18/24 16:50:31.248 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:31.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-9658" for this suite. 04/18/24 16:50:31.255 STEP: Destroying namespace "nsdeletetest-532" for this suite. 04/18/24 16:50:31.26 Apr 18 16:50:31.263: INFO: Namespace nsdeletetest-532 was already deleted STEP: Destroying namespace "nsdeletetest-1655" for this suite. 04/18/24 16:50:31.263 ------------------------------ • [SLOW TEST] [13.091 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:243
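The spec relies on the namespace lifecycle guarantee: deleting a namespace reaps every pod in it before the Namespace object itself disappears (about 11 seconds in the run above). A rough client-go sketch of the same delete, wait, recreate, verify flow; the nsdeletetest name and polling interval are assumptions:

package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const ns = "nsdeletetest" // illustrative name
	// Delete the namespace; the apiserver moves it to Terminating and reaps its pods.
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Poll until the Namespace object is fully gone, like the
	// "Waiting for the namespace to be removed" step.
	for {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			break
		}
		if err != nil {
			panic(err)
		}
		time.Sleep(2 * time.Second)
	}
	// Recreate the namespace and confirm it starts empty, as the spec does.
	if _, err := cs.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods remaining after recreate: %d\n", len(pods.Items))
}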
------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:31.299 Apr 18 16:50:31.299: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/18/24 16:50:31.301 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:31.312 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:31.316 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385 Apr 18 16:50:31.336: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster.
04/18/24 16:50:31.341 Apr 18 16:50:31.346: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:31.348: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:31.349: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:32.353: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:32.357: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:32.357: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:33.358: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:33.362: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 18 16:50:33.362: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Update daemon pods image. 04/18/24 16:50:33.376 STEP: Check that daemon pods images are updated. 04/18/24 16:50:33.387 Apr 18 16:50:33.391: INFO: Wrong image for pod: daemon-set-8dbkh. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 18 16:50:33.391: INFO: Wrong image for pod: daemon-set-d2mmv. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 18 16:50:33.395: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:34.400: INFO: Wrong image for pod: daemon-set-d2mmv. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 18 16:50:34.404: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:35.399: INFO: Wrong image for pod: daemon-set-d2mmv. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 18 16:50:35.403: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:36.399: INFO: Wrong image for pod: daemon-set-d2mmv. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. Apr 18 16:50:36.399: INFO: Pod daemon-set-wkl95 is not available Apr 18 16:50:36.403: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:37.400: INFO: Wrong image for pod: daemon-set-d2mmv. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
Apr 18 16:50:37.400: INFO: Pod daemon-set-wkl95 is not available Apr 18 16:50:37.404: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:38.403: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:39.400: INFO: Pod daemon-set-lrjf5 is not available Apr 18 16:50:39.403: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 04/18/24 16:50:39.403 Apr 18 16:50:39.407: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:39.410: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:50:39.410: INFO: Node v126-worker2 is running 0 daemon pod, expected 1 Apr 18 16:50:40.416: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:40.420: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 18 16:50:40.420: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:50:40.436 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3675, will wait for the garbage collector to delete the pods 04/18/24 16:50:40.437 Apr 18 16:50:40.495: INFO: Deleting DaemonSet.extensions daemon-set took: 5.063757ms Apr 18 16:50:40.595: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.098867ms Apr 18 16:50:43.198: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:43.198: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 18 16:50:43.201: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"48943"},"items":null} Apr 18 16:50:43.203: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"48943"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:43.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-3675" for this suite. 
04/18/24 16:50:43.216 ------------------------------ • [SLOW TEST] [11.922 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:385
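The rollout above is driven entirely by a pod template change: with spec.updateStrategy.type set to RollingUpdate (the default), patching the container image makes the controller replace daemon pods node by node, which is why the log shows one not-available pod at a time. A hedged sketch of such a patch; the namespace, DaemonSet name, and container name "app" are illustrative:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "default", "daemon-set" // illustrative
	// Changing the pod template image bumps the template generation and, under
	// RollingUpdate, triggers the node-by-node replacement seen in the log.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/agnhost:2.43"}]}}}}`)
	ds, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("generation %d, updated %d/%d pods\n",
		ds.Generation, ds.Status.UpdatedNumberScheduled, ds.Status.DesiredNumberScheduled)
}

Note that the update strategy itself lives at spec.updateStrategy; the "unknown field "updateStrategy"" warning earlier in this run is what the apiserver emits when that field is supplied at the wrong nesting level.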
------------------------------ [sig-api-machinery] Namespaces [Serial] should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:43.226 Apr 18 16:50:43.227: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/18/24 16:50:43.228 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:43.237 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:43.241 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394 STEP: Creating namespace "e2e-ns-44gvm" 04/18/24 16:50:43.245 Apr 18 16:50:43.253: INFO: Namespace "e2e-ns-44gvm-3403" has []v1.FinalizerName{"kubernetes"} STEP: Adding e2e finalizer to namespace "e2e-ns-44gvm-3403" 04/18/24 16:50:43.253 Apr 18 16:50:43.260: INFO: Namespace "e2e-ns-44gvm-3403" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"} STEP: Removing e2e finalizer from namespace "e2e-ns-44gvm-3403" 04/18/24 16:50:43.26 Apr 18 16:50:43.267: INFO: Namespace "e2e-ns-44gvm-3403" has []v1.FinalizerName{"kubernetes"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:43.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-1253" for this suite. 04/18/24 16:50:43.271 STEP: Destroying namespace "e2e-ns-44gvm-3403" for this suite.
04/18/24 16:50:43.275 ------------------------------ • [0.053 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply a finalizer to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:394
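Namespace finalizers live in spec.finalizers and are written through the namespace's finalize subresource rather than a plain update, which is what the add/remove steps above exercise. A minimal sketch of adding one; the target namespace and finalizer string are illustrative:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, err := cs.CoreV1().Namespaces().Get(ctx, "default", metav1.GetOptions{}) // illustrative target
	if err != nil {
		panic(err)
	}
	// Append a custom finalizer next to the built-in "kubernetes" one, then write
	// it back through the namespace's finalize subresource.
	ns.Spec.Finalizers = append(ns.Spec.Finalizers, corev1.FinalizerName("e2e.example.com/fakeFinalizer"))
	ns, err = cs.CoreV1().Namespaces().Finalize(ctx, ns, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("finalizers:", ns.Spec.Finalizers)
}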
------------------------------ [sig-api-machinery] Namespaces [Serial] should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:43.319 Apr 18 16:50:43.319: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/18/24 16:50:43.321 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:43.329 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:43.332 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 STEP: Updating Namespace "namespaces-3797" 04/18/24 16:50:43.336 Apr 18 16:50:43.341: INFO: Namespace "namespaces-3797" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"589c8b1f-48ba-4a40-ae30-91bb4f146584", "kubernetes.io/metadata.name":"namespaces-3797", "namespaces-3797":"updated", "pod-security.kubernetes.io/enforce":"baseline"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:43.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-3797" for this suite.
04/18/24 16:50:43.345 ------------------------------ • [0.030 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply an update to a Namespace [Conformance] test/e2e/apimachinery/namespace.go:366 ------------------------------ [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:873 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:43.361 Apr 18 16:50:43.361: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/18/24 16:50:43.363 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:43.372 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:43.376 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:873 STEP: Creating simple DaemonSet "daemon-set" 04/18/24 16:50:43.394 STEP: Check that daemon pods launch on every node of the cluster.
04/18/24 16:50:43.4 Apr 18 16:50:43.403: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:43.406: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:43.406: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:44.410: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:44.413: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:44.413: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:45.411: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:45.415: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 18 16:50:45.415: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Getting /status 04/18/24 16:50:45.418 Apr 18 16:50:45.422: INFO: Daemon Set daemon-set has Conditions: [] STEP: updating the DaemonSet Status 04/18/24 16:50:45.422 Apr 18 16:50:45.432: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the daemon set status to be updated 04/18/24 16:50:45.432 Apr 18 16:50:45.435: INFO: Observed &DaemonSet event: ADDED Apr 18 16:50:45.435: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.435: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.435: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.435: INFO: Found daemon set daemon-set in namespace daemonsets-6191 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 18 16:50:45.435: INFO: Daemon set daemon-set has an updated status STEP: patching the DaemonSet Status 04/18/24 16:50:45.435 STEP: watching for the daemon set status to be patched 04/18/24 16:50:45.443 Apr 18 16:50:45.446: INFO: Observed &DaemonSet event: ADDED Apr 18 16:50:45.446: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.446: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.446: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.446: INFO: Observed daemon set daemon-set in namespace daemonsets-6191 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 18 16:50:45.447: INFO: Observed &DaemonSet event: MODIFIED Apr 18 16:50:45.447: INFO: Found daemon set daemon-set in namespace daemonsets-6191 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] Apr 18 16:50:45.447: INFO: Daemon set daemon-set has a patched status [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:50:45.45 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6191, will wait for the garbage collector to 
delete the pods 04/18/24 16:50:45.45 Apr 18 16:50:45.508: INFO: Deleting DaemonSet.extensions daemon-set took: 4.474003ms Apr 18 16:50:45.608: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.858687ms Apr 18 16:50:48.212: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:48.212: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 18 16:50:48.214: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"49009"},"items":null} Apr 18 16:50:48.217: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"49009"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:48.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-6191" for this suite. 04/18/24 16:50:48.232 ------------------------------ • [4.877 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:873
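Because DaemonSet has a status subresource, the update and patch above must target /status explicitly; status fields sent in a write to the main resource are ignored by the apiserver. A small sketch of the patch variant, with namespace and name as placeholder values:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "default", "daemon-set" // illustrative
	// The trailing "status" argument routes the patch to the status subresource,
	// mirroring the "patching the DaemonSet Status" step in the log.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	ds, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions:", ds.Status.Conditions)
}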
------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:305 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:48.294 Apr 18 16:50:48.294: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/18/24 16:50:48.296 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:48.305 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:48.308 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should retry creating failed daemon pods [Conformance]
test/e2e/apps/daemon_set.go:305 STEP: Creating a simple DaemonSet "daemon-set" 04/18/24 16:50:48.325 STEP: Check that daemon pods launch on every node of the cluster. 04/18/24 16:50:48.33 Apr 18 16:50:48.333: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:48.336: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:48.336: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:49.340: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:49.342: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:49.342: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:50.341: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:50.345: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 18 16:50:50.345: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 04/18/24 16:50:50.348 Apr 18 16:50:50.363: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:50.366: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:50:50.366: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:51.371: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:51.375: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:50:51.375: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:52.370: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 18 16:50:52.374: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Apr 18 16:50:52.374: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 
04/18/24 16:50:52.374 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:50:52.379 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6662, will wait for the garbage collector to delete the pods 04/18/24 16:50:52.379 Apr 18 16:50:52.438: INFO: Deleting DaemonSet.extensions daemon-set took: 4.742098ms Apr 18 16:50:52.538: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.384408ms Apr 18 16:50:55.242: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:55.242: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 18 16:50:55.245: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"49137"},"items":null} Apr 18 16:50:55.247: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"49137"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:50:55.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-6662" for this suite. 04/18/24 16:50:55.262 ------------------------------ • [SLOW TEST] [6.973 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:305
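The revival behavior comes from the DaemonSet controller treating a Failed daemon pod as dead: it deletes the pod and schedules a replacement on the same node, which is the brief dip to one available pod in the log. A sketch of forcing that state via the pod status subresource; the namespace and label selector are made up:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, selector := "default", "daemonset-name=daemon-set" // illustrative
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no daemon pods found")
	}
	// Force one daemon pod into the Failed phase via the status subresource; the
	// DaemonSet controller should delete it and create a replacement on that node.
	pod := pods.Items[0]
	pod.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(ns).UpdateStatus(ctx, &pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("marked %s as Failed; the controller should recreate it\n", pod.Name)
}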
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205 [BeforeEach] [sig-apps] Daemon set [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:50:55.326 Apr 18 16:50:55.326: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 04/18/24 16:50:55.328 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:50:55.339 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:50:55.343 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:157 [It] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205 Apr 18 16:50:55.360: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 04/18/24 16:50:55.364 Apr 18 16:50:55.367: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:55.367: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Change node label to blue, check that daemon pod is launched.
04/18/24 16:50:55.367 Apr 18 16:50:55.384: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:55.384: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:56.387: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:50:56.387: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set STEP: Update the node label to green, and wait for daemons to be unscheduled 04/18/24 16:50:56.39 Apr 18 16:50:56.403: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:50:56.403: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set Apr 18 16:50:57.409: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:57.409: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 04/18/24 16:50:57.409 Apr 18 16:50:57.420: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:57.420: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:58.424: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:58.424: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:50:59.424: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:50:59.424: INFO: Node v126-worker is running 0 daemon pod, expected 1 Apr 18 16:51:00.424: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Apr 18 16:51:00.424: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:122 STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:51:00.43 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-523, will wait for the garbage collector to delete the pods 04/18/24 16:51:00.43 Apr 18 16:51:00.487: INFO: Deleting DaemonSet.extensions daemon-set took: 4.390731ms Apr 18 16:51:00.588: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.939195ms Apr 18 16:51:03.291: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Apr 18 16:51:03.292: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Apr 18 16:51:03.294: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"49225"},"items":null} Apr 18 16:51:03.297: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"49225"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:51:03.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "daemonsets-523" for this suite. 
04/18/24 16:51:03.319
------------------------------
• [SLOW TEST] [7.998 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:205
------------------------------
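The "complex daemon" spec shows the two levers that make a DaemonSet scheduling-selective: the pod template's nodeSelector (a color label flipped between blue and green above) and spec.updateStrategy. Note the transient state in the log, "Number of running nodes: 0, number of available pods: 1": the controller has already decided no node should run a pod, but the old pod has not finished terminating. A hedged sketch of the same label dance, with the node and DaemonSet names taken from the log and the label key and namespace assumed:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Relabel the node (the "blue" -> "green" step); "color" is an assumed key.
	patch := []byte(`{"metadata":{"labels":{"color":"green"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "v126-worker", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Point the DaemonSet at the new label and switch to RollingUpdate,
	// mirroring the "Update DaemonSet node selector to green" step.
	ds, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{}) // namespace assumed
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.NodeSelector = map[string]string{"color": "green"}
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType}
	if _, err := cs.AppsV1().DaemonSets("default").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}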
------------------------------
[sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:299 [BeforeEach] [sig-api-machinery] Namespaces [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:51:03.333 Apr 18 16:51:03.333: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 04/18/24 16:51:03.335 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:51:03.345 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:51:03.348 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:31 [It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:299 STEP: Read namespace status 04/18/24 16:51:03.352 Apr 18 16:51:03.356: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} STEP: Patch namespace status 04/18/24 16:51:03.356 Apr 18 16:51:03.361: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} STEP: Update namespace status 04/18/24 16:51:03.361 Apr 18 16:51:03.369: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/node/init/init.go:32 Apr 18 16:51:03.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] tear down framework | framework.go:193 STEP: Destroying namespace "namespaces-8177" for this suite.
04/18/24 16:51:03.372
------------------------------
• [0.044 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:299
------------------------------
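Both mutations above go through the namespace's status subresource, not the main resource, which is why the logged conditions (StatusPatch, StatusUpdate) never appear in the object's spec. A sketch of the patch step, assuming the same clientset setup as in the earlier examples and an assumed namespace name; the condition payload mirrors the one logged:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Merge-patch the "status" subresource to add a condition, as the
	// "Patch namespace status" step does. The namespace name is assumed.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatch","status":"True","reason":"E2E","message":"Patched by an e2e test"}]}}`)
	ns, err := cs.CoreV1().Namespaces().Patch(context.TODO(), "default",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Printf("conditions: %+v\n", ns.Status.Conditions)
}

The "Update namespace status" step is the same idea as a read-modify-write: Get the namespace, append a condition to Status.Conditions, then call cs.CoreV1().Namespaces().UpdateStatus(ctx, ns, metav1.UpdateOptions{}).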
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 [BeforeEach] [sig-storage] EmptyDir wrapper volumes set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:51:03.387 Apr 18 16:51:03.387: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper 04/18/24 16:51:03.388 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:51:03.397 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:51:03.4 [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:31 [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 STEP: Creating 50 configmaps 04/18/24 16:51:03.404 STEP: Creating RC which spawns configmap-volume pods 04/18/24 16:51:03.642 Apr 18 16:51:03.765: INFO: Pod name wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317: Found 5 pods out of 5 STEP: Ensuring each pod is running 04/18/24 16:51:03.765 Apr 18 16:51:03.765: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:03.792: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 26.658172ms Apr 18 16:51:05.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032783577s Apr 18 16:51:07.799: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033264118s Apr 18 16:51:09.797: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031738659s Apr 18 16:51:11.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032940859s Apr 18 16:51:13.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 10.03257292s Apr 18 16:51:15.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Pending", Reason="", readiness=false. Elapsed: 12.03229131s Apr 18 16:51:17.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts": Phase="Running", Reason="", readiness=true.
Elapsed: 14.032377646s Apr 18 16:51:17.798: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5hmts" satisfied condition "running" Apr 18 16:51:17.798: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5t99b" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:17.802: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5t99b": Phase="Running", Reason="", readiness=true. Elapsed: 4.064872ms Apr 18 16:51:17.802: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-5t99b" satisfied condition "running" Apr 18 16:51:17.802: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-crbvb" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:17.806: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-crbvb": Phase="Running", Reason="", readiness=true. Elapsed: 4.119906ms Apr 18 16:51:17.806: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-crbvb" satisfied condition "running" Apr 18 16:51:17.806: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-fmwkk" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:17.810: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-fmwkk": Phase="Running", Reason="", readiness=true. Elapsed: 3.805234ms Apr 18 16:51:17.810: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-fmwkk" satisfied condition "running" Apr 18 16:51:17.810: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-k57jp" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:17.813: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-k57jp": Phase="Running", Reason="", readiness=true. Elapsed: 3.556887ms Apr 18 16:51:17.813: INFO: Pod "wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317-k57jp" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317 in namespace emptydir-wrapper-7019, will wait for the garbage collector to delete the pods 04/18/24 16:51:17.813 Apr 18 16:51:17.874: INFO: Deleting ReplicationController wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317 took: 6.196373ms Apr 18 16:51:17.974: INFO: Terminating ReplicationController wrapped-volume-race-bad98d89-6028-4c59-be1e-3cf7e0afc317 pods took: 100.103908ms STEP: Creating RC which spawns configmap-volume pods 04/18/24 16:51:21.678 Apr 18 16:51:21.693: INFO: Pod name wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b: Found 0 pods out of 5 Apr 18 16:51:26.703: INFO: Pod name wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b: Found 5 pods out of 5 STEP: Ensuring each pod is running 04/18/24 16:51:26.703 Apr 18 16:51:26.704: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:26.708: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093071ms Apr 18 16:51:28.712: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00845433s Apr 18 16:51:30.714: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010047086s Apr 18 16:51:32.713: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009514768s Apr 18 16:51:34.712: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008665669s Apr 18 16:51:36.714: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27": Phase="Running", Reason="", readiness=true. Elapsed: 10.010164391s Apr 18 16:51:36.714: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-75v27" satisfied condition "running" Apr 18 16:51:36.714: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-h2kwb" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:36.717: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-h2kwb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489191ms Apr 18 16:51:38.722: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-h2kwb": Phase="Running", Reason="", readiness=true. Elapsed: 2.008113333s Apr 18 16:51:38.722: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-h2kwb" satisfied condition "running" Apr 18 16:51:38.722: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-rpfsx" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:38.726: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-rpfsx": Phase="Running", Reason="", readiness=true. Elapsed: 4.18808ms Apr 18 16:51:38.726: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-rpfsx" satisfied condition "running" Apr 18 16:51:38.726: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tbs8r" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:38.730: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tbs8r": Phase="Running", Reason="", readiness=true. Elapsed: 3.984624ms Apr 18 16:51:38.730: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tbs8r" satisfied condition "running" Apr 18 16:51:38.730: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tjvcj" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:38.734: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tjvcj": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.633444ms Apr 18 16:51:38.734: INFO: Pod "wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b-tjvcj" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b in namespace emptydir-wrapper-7019, will wait for the garbage collector to delete the pods 04/18/24 16:51:38.734 Apr 18 16:51:38.794: INFO: Deleting ReplicationController wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b took: 5.246499ms Apr 18 16:51:38.894: INFO: Terminating ReplicationController wrapped-volume-race-2d3efb0a-0bca-4e25-bd74-d12bd311fd7b pods took: 100.485018ms STEP: Creating RC which spawns configmap-volume pods 04/18/24 16:51:41.899 Apr 18 16:51:41.916: INFO: Pod name wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef: Found 0 pods out of 5 Apr 18 16:51:46.927: INFO: Pod name wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef: Found 5 pods out of 5 STEP: Ensuring each pod is running 04/18/24 16:51:46.927 Apr 18 16:51:46.927: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:46.931: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922919ms Apr 18 16:51:48.936: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008812863s Apr 18 16:51:50.935: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008354251s Apr 18 16:51:52.936: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008744995s Apr 18 16:51:54.936: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009112204s Apr 18 16:51:56.937: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c": Phase="Running", Reason="", readiness=true. Elapsed: 10.009951084s Apr 18 16:51:56.937: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-4vb7c" satisfied condition "running" Apr 18 16:51:56.937: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-kcqfw" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:56.940: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-kcqfw": Phase="Running", Reason="", readiness=true. Elapsed: 3.45265ms Apr 18 16:51:56.940: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-kcqfw" satisfied condition "running" Apr 18 16:51:56.940: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-rqclh" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:56.944: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-rqclh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625716ms Apr 18 16:51:58.949: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-rqclh": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008183799s Apr 18 16:51:58.949: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-rqclh" satisfied condition "running" Apr 18 16:51:58.949: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-tv5xt" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:58.953: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-tv5xt": Phase="Running", Reason="", readiness=true. Elapsed: 3.773153ms Apr 18 16:51:58.953: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-tv5xt" satisfied condition "running" Apr 18 16:51:58.953: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-zqlfh" in namespace "emptydir-wrapper-7019" to be "running" Apr 18 16:51:58.956: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-zqlfh": Phase="Running", Reason="", readiness=true. Elapsed: 3.836035ms Apr 18 16:51:58.956: INFO: Pod "wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef-zqlfh" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef in namespace emptydir-wrapper-7019, will wait for the garbage collector to delete the pods 04/18/24 16:51:58.956 Apr 18 16:51:59.017: INFO: Deleting ReplicationController wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef took: 5.817184ms Apr 18 16:51:59.118: INFO: Terminating ReplicationController wrapped-volume-race-0e537631-4e70-410e-9dfc-6fb3ceb361ef pods took: 100.630327ms STEP: Cleaning up the configMaps 04/18/24 16:52:02.418 [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/node/init/init.go:32 Apr 18 16:52:02.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes dump namespaces | framework.go:196 [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes tear down framework | framework.go:193 STEP: Destroying namespace "emptydir-wrapper-7019" for this suite. 
04/18/24 16:52:02.638
------------------------------
• [SLOW TEST] [59.255 seconds] [sig-storage] EmptyDir wrapper volumes test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189
------------------------------
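What makes this spec a race detector: every pod in the ReplicationController mounts all 50 ConfigMaps as separate volumes, each of which passes through the kubelet's emptyDir "wrapper" path, the historical source of the mount race this conformance test guards against. The create/ensure-running/delete cycle runs three times (the three wrapped-volume-race-* RCs above) because the original failure was probabilistic and a single pass could succeed by luck. A sketch of how one such pod's volume set could be assembled (names, counts, image, and mount paths are illustrative; the suite's own helper lives in test/e2e/storage/empty_dir_wrapper.go):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One volume + mount per ConfigMap, mirroring the wrapped-volume-race pods.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i) // assumed naming scheme
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: "/etc/config-" + name,
			ReadOnly:  true,
		})
	}
	spec := corev1.PodSpec{
		Volumes: volumes,
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "registry.k8s.io/e2e-test-images/busybox:1.29-4", // assumed
			Command:      []string{"sleep", "10000"},
			VolumeMounts: mounts,
		}},
	}
	fmt.Println("pod spec carries", len(spec.Volumes), "configmap volumes")
}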
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] set up framework | framework.go:178 STEP: Creating a kubernetes client 04/18/24 16:52:02.659 Apr 18 16:52:02.659: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 04/18/24 16:52:02.66 STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:52:02.673 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:52:02.676 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:97 Apr 18 16:52:02.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 18 16:52:02.685: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 16:52:02.687: INFO: Logging pods the apiserver thinks is on node v126-worker before test Apr 18 16:52:02.692: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.692: INFO: Container loopdev ready: true, restart count 0 Apr 18 16:52:02.692: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.692: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 16:52:02.692: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.692: INFO: Container kube-proxy ready: true, restart count 0 Apr 18 16:52:02.692: INFO: Logging pods the apiserver thinks is on node v126-worker2 before test Apr 18 16:52:02.696: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.696: INFO: Container loopdev ready: true, restart count 0 Apr 18 16:52:02.696: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.696: INFO: Container kindnet-cni ready: true, restart count 0 Apr 18 16:52:02.696: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container statuses recorded) Apr 18 16:52:02.696: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:704 STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 16:52:02.696 Apr 18 16:52:02.702: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-2451" to be "running" Apr 18 16:52:02.704: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376248ms Apr 18 16:52:04.708: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006383476s Apr 18 16:52:04.708: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 16:52:04.711 STEP: Trying to apply a random label on the found node. 04/18/24 16:52:04.722 STEP: verifying the node has the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 95 04/18/24 16:52:04.734 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 04/18/24 16:52:04.737 Apr 18 16:52:04.741: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-2451" to be "not pending" Apr 18 16:52:04.744: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627694ms Apr 18 16:52:06.748: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. Elapsed: 2.006455144s Apr 18 16:52:06.748: INFO: Pod "pod4" satisfied condition "not pending" STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.22.0.3 on the node which pod4 resides and expect not scheduled 04/18/24 16:52:06.748 Apr 18 16:52:06.753: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-2451" to be "not pending" Apr 18 16:52:06.756: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059279ms Apr 18 16:52:08.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007294837s Apr 18 16:52:10.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.0071418s Apr 18 16:52:12.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006959838s Apr 18 16:52:14.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007667796s Apr 18 16:52:16.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.007704683s Apr 18 16:52:18.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007634188s Apr 18 16:52:20.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.007955408s Apr 18 16:52:22.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.00910175s Apr 18 16:52:24.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007218561s Apr 18 16:52:26.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008953217s Apr 18 16:52:28.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.006954545s Apr 18 16:52:30.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.008557614s Apr 18 16:52:32.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.006905059s Apr 18 16:52:34.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.007376542s Apr 18 16:52:36.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.007498857s Apr 18 16:52:38.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.0066726s Apr 18 16:52:40.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.006777088s Apr 18 16:52:42.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.008217331s Apr 18 16:52:44.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.006901722s Apr 18 16:52:46.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.008648558s Apr 18 16:52:48.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.008256815s Apr 18 16:52:50.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.008771573s Apr 18 16:52:52.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.008189361s Apr 18 16:52:54.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.007736284s Apr 18 16:52:56.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.007929162s Apr 18 16:52:58.760: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.006543066s Apr 18 16:53:00.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.007721755s Apr 18 16:53:02.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.008249984s Apr 18 16:53:04.764: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.010922776s Apr 18 16:53:06.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.008990326s Apr 18 16:53:08.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008136844s Apr 18 16:53:10.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.008844824s Apr 18 16:53:12.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.008459038s Apr 18 16:53:14.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.007724734s Apr 18 16:53:16.761: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m10.008369505s
[... 2s poll entries elided: "pod5" remained Pending for the full 5m0s timeout ...]
Apr 18 16:57:06.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.008528099s
Apr 18 16:57:06.764: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.011304736s
STEP: removing the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 off the node v126-worker2 04/18/24 16:57:06.764
STEP: verifying the node doesn't have the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 04/18/24 16:57:06.781
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:57:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-2451" for this suite. 04/18/24 16:57:06.788
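The repeated Pending entries above are the framework's standard wait loop: it re-reads the pod every 2s and logs the phase until the pod leaves Pending or the 5m0s budget expires (here it never does, which is the expected outcome for pod5). A minimal sketch of such a loop in client-go, assuming a configured clientset; waitForPodNotPending and podReady are illustrative helpers, not the framework's own code:

// Poll a pod every 2s for up to 5m, logging its phase, until it leaves Pending.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodNotPending returns nil once the pod reports any phase other than
// Pending, or a timeout error after 5 minutes of 2s polls.
func waitForPodNotPending(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q, readiness=%v\n", name, pod.Status.Phase, podReady(pod))
		return pod.Status.Phase != corev1.PodPending, nil
	})
}

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}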
------------------------------
• [SLOW TEST] [304.133 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/scheduling/predicates.go:704
Begin Captured GinkgoWriter Output >>
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:52:02.659
Apr 18 16:52:02.659: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-pred 04/18/24 16:52:02.66
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:52:02.673
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:52:02.676
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:97
Apr 18 16:52:02.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 18 16:52:02.685: INFO: Waiting for terminating namespaces to be deleted...
Apr 18 16:52:02.687: INFO: Logging pods the apiserver thinks are on node v126-worker before test
Apr 18 16:52:02.692: INFO: create-loop-devs-w9ldx from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.692: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:52:02.692: INFO: kindnet-68nxx from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.692: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:52:02.692: INFO: kube-proxy-4wtz6 from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.692: INFO: Container kube-proxy ready: true, restart count 0
Apr 18 16:52:02.692: INFO: Logging pods the apiserver thinks are on node v126-worker2 before test
Apr 18 16:52:02.696: INFO: create-loop-devs-xnxkn from kube-system started at 2024-04-18 11:44:50 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.696: INFO: Container loopdev ready: true, restart count 0
Apr 18 16:52:02.696: INFO: kindnet-wqc6h from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.696: INFO: Container kindnet-cni ready: true, restart count 0
Apr 18 16:52:02.696: INFO: kube-proxy-hjqqd from kube-system started at 2024-04-18 11:44:48 +0000 UTC (1 container status recorded)
Apr 18 16:52:02.696: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/scheduling/predicates.go:704
STEP: Trying to launch a pod without a label to get a node which can launch it. 04/18/24 16:52:02.696
Apr 18 16:52:02.702: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-2451" to be "running"
Apr 18 16:52:02.704: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376248ms
Apr 18 16:52:04.708: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006383476s
Apr 18 16:52:04.708: INFO: Pod "without-label" satisfied condition "running"
STEP: Explicitly delete pod here to free the resource it takes. 04/18/24 16:52:04.711
STEP: Trying to apply a random label on the found node. 04/18/24 16:52:04.722
STEP: verifying the node has the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 95 04/18/24 16:52:04.734
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled 04/18/24 16:52:04.737
Apr 18 16:52:04.741: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-2451" to be "not pending"
Apr 18 16:52:04.744: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627694ms
Apr 18 16:52:06.748: INFO: Pod "pod4": Phase="Running", Reason="", readiness=false. Elapsed: 2.006455144s
Apr 18 16:52:06.748: INFO: Pod "pod4" satisfied condition "not pending"
STEP: Trying to create another pod (pod5) with the same hostPort 54322 but hostIP 172.22.0.3 on the node where pod4 resides and expect it not to be scheduled 04/18/24 16:52:06.748
Apr 18 16:52:06.753: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-2451" to be "not pending"
Apr 18 16:52:06.756: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.059279ms
[... 2s poll entries elided: "pod5" remained Pending for the full 5m0s timeout ...]
Apr 18 16:57:06.762: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.008528099s
Apr 18 16:57:06.764: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.011304736s
STEP: removing the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 off the node v126-worker2 04/18/24 16:57:06.764
STEP: verifying the node doesn't have the label kubernetes.io/e2e-925dea21-ee14-4e0a-82c8-e5ed98c260b1 04/18/24 16:57:06.781
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:57:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:88
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-pred-2451" for this suite. 04/18/24 16:57:06.788
<< End Captured GinkgoWriter Output
------------------------------
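The five-minute Pending wait is the point of this spec: pod4 claims TCP host port 54322 on hostIP 0.0.0.0, which the scheduler treats as covering every node address, so pod5's request for the same port and protocol on the specific hostIP 172.22.0.3 can never fit on the same node. A minimal client-go sketch of the two pod shapes, assuming the agnhost test image; hostPortPod is an illustrative helper, not the framework's code:

// Two pods whose hostPort/hostIP pairs conflict on one node.
package hostportconflict

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to the labelled node that claims TCP host
// port 54322 on the given host IP.
func hostPortPod(name, hostIP string, nodeSelector map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: nodeSelector, // e.g. the random kubernetes.io/e2e-* label applied above
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative tag
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP, // "0.0.0.0" (or "") conflicts with any specific IP
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

// pod4 := hostPortPod("pod4", "0.0.0.0", sel)    // schedules
// pod5 := hostPortPod("pod5", "172.22.0.3", sel) // conflicts with pod4, stays Pending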
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
  test/e2e/apps/daemon_set.go:177
[BeforeEach] [sig-apps] Daemon set [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:57:06.793
Apr 18 16:57:06.793: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 04/18/24 16:57:06.795
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:57:06.804
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:57:06.808
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:157
[It] should run and stop simple daemon [Conformance]
  test/e2e/apps/daemon_set.go:177
STEP: Creating simple DaemonSet "daemon-set" 04/18/24 16:57:06.826
STEP: Check that daemon pods launch on every node of the cluster. 04/18/24 16:57:06.831
Apr 18 16:57:06.834: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:06.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:57:06.837: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:07.840: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:07.843: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:57:07.843: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:08.843: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:08.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 18 16:57:08.847: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Stop a daemon pod, check that the daemon pod is revived. 04/18/24 16:57:08.85
Apr 18 16:57:08.866: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:08.869: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 18 16:57:08.869: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:09.875: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:09.879: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 18 16:57:09.879: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:10.875: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:10.880: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 18 16:57:10.880: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:11.875: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:11.878: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Apr 18 16:57:11.878: INFO: Node v126-worker is running 0 daemon pods, expected 1
Apr 18 16:57:12.874: INFO: DaemonSet pods can't tolerate node v126-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 18 16:57:12.877: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Apr 18 16:57:12.877: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:122
STEP: Deleting DaemonSet "daemon-set" 04/18/24 16:57:12.88
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7906, will wait for the garbage collector to delete the pods 04/18/24 16:57:12.88
Apr 18 16:57:12.939: INFO: Deleting DaemonSet.extensions daemon-set took: 4.988663ms
Apr 18 16:57:13.039: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.162496ms
Apr 18 16:57:15.043: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Apr 18 16:57:15.043: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Apr 18 16:57:15.046: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"50665"},"items":null}
Apr 18 16:57:15.049: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"50665"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:57:15.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-apps] Daemon set [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "daemonsets-7906" for this suite. 04/18/24 16:57:15.064
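This spec creates a DaemonSet, waits for one pod per schedulable node, deletes one pod, and waits for the controller to replace it. The recurring "can't tolerate" line is the scheduler-side reason the control-plane node is excluded: the pod template carries no toleration for the NoSchedule taint, so only the two workers count. A minimal sketch of such a DaemonSet in client-go types, with an illustrative image and labels; the commented toleration would extend it to the tainted control-plane node:

// A simple DaemonSet; without the toleration it skips tainted nodes.
package simpledaemonset

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func simpleDaemonSet(namespace string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: namespace},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Uncomment to also run on tainted control-plane nodes:
					// Tolerations: []corev1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/control-plane",
					// 	Operator: corev1.TolerationOpExists,
					// 	Effect:   corev1.TaintEffectNoSchedule,
					// }},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "registry.k8s.io/e2e-test-images/httpd:2.4.38-4", // illustrative
					}},
				},
			},
		},
	}
}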
------------------------------
• [SLOW TEST] [8.275 seconds]
[sig-apps] Daemon set [Serial]
  test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  test/e2e/apps/daemon_set.go:177
Begin Captured GinkgoWriter Output >>
[... captured output elided: verbatim repeat of the log streamed above for this spec ...]
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:130
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  set up framework | framework.go:178
STEP: Creating a kubernetes client 04/18/24 16:57:15.07
Apr 18 16:57:15.070: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 04/18/24 16:57:15.072
STEP: Waiting for a default service account to be provisioned in namespace 04/18/24 16:57:15.083
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 04/18/24 16:57:15.087
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:97
Apr 18 16:57:15.103: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 18 16:58:15.126: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:130
STEP: Create pods that use 4/5 of node resources. 04/18/24 16:58:15.128
Apr 18 16:58:15.148: INFO: Created pod: pod0-0-sched-preemption-low-priority
Apr 18 16:58:15.153: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Apr 18 16:58:15.170: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Apr 18 16:58:15.175: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 04/18/24 16:58:15.175
Apr 18 16:58:15.175: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-8304" to be "running"
Apr 18 16:58:15.179: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674398ms
Apr 18 16:58:17.183: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.00784332s
Apr 18 16:58:17.183: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Apr 18 16:58:17.183: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-8304" to be "running"
Apr 18 16:58:17.186: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.748879ms
Apr 18 16:58:17.186: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
Apr 18 16:58:17.186: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-8304" to be "running"
Apr 18 16:58:17.189: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.773444ms
Apr 18 16:58:17.189: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running"
Apr 18 16:58:17.189: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-8304" to be "running"
Apr 18 16:58:17.191: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.480999ms
Apr 18 16:58:17.191: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a high priority pod that has the same requirements as the lower-priority pod 04/18/24 16:58:17.191
Apr 18 16:58:17.196: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-8304" to be "running"
Apr 18 16:58:17.199: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.787552ms
Apr 18 16:58:19.202: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00574264s
Apr 18 16:58:21.202: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006242442s
Apr 18 16:58:23.204: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007658964s
Apr 18 16:58:25.204: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 8.007627498s
Apr 18 16:58:25.204: INFO: Pod "preemptor-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/node/init/init.go:32
Apr 18 16:58:25.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:84
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  dump namespaces | framework.go:196
[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
  tear down framework | framework.go:193
STEP: Destroying namespace "sched-preemption-8304" for this suite. 04/18/24 16:58:25.253
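Preemption here hinges on PriorityClass values: low- and medium-priority filler pods occupy 4/5 of each node's resources, and the high-priority preemptor-pod fits only after the scheduler evicts the low-priority one. A minimal sketch of the ingredients, with illustrative names, image, and sizing (the framework derives its requests from actual node capacity, not a fixed 2Gi):

// PriorityClasses plus a preemptor pod sized to force an eviction.
package preemption

import (
	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// priorityClass builds a PriorityClass; the scheduler may evict pods of a
// lower Value to place a pod of a higher one.
func priorityClass(name string, value int32) *schedulingv1.PriorityClass {
	return &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Value:      value,
	}
}

// preemptorPod requests enough memory that it can only be scheduled by
// preempting a pod of a lower priority class.
func preemptorPod(priorityClassName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: priorityClassName, // must reference an existing PriorityClass
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.k8s.io/pause:3.9",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("2Gi"), // illustrative size
					},
				},
			}},
		},
	}
}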
------------------------------
• [SLOW TEST] [70.188 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  test/e2e/scheduling/preemption.go:130
Begin Captured GinkgoWriter Output >>
[... captured output elided: verbatim repeat of the log streamed above for this spec ...]
<< End Captured GinkgoWriter Output
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[SynchronizedAfterSuite]
  test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
Apr 18 16:58:25.291: INFO: Running AfterSuite actions on node 1
Apr 18 16:58:25.291: INFO: Skipping dumping logs from cluster
------------------------------
[SynchronizedAfterSuite] PASSED [0.000 seconds]
[SynchronizedAfterSuite]
  test/e2e/e2e.go:88
Begin Captured GinkgoWriter Output >>
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
[SynchronizedAfterSuite] TOP-LEVEL
  test/e2e/e2e.go:88
Apr 18 16:58:25.291: INFO: Running AfterSuite actions on node 1
Apr 18 16:58:25.291: INFO: Skipping dumping logs from cluster
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e suite report
  test/e2e/e2e_test.go:153
[ReportAfterSuite] TOP-LEVEL
  test/e2e/e2e_test.go:153
------------------------------
[ReportAfterSuite] PASSED [0.000 seconds]
[ReportAfterSuite] Kubernetes e2e suite report
  test/e2e/e2e_test.go:153
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
  test/e2e/e2e_test.go:153
<< End Captured GinkgoWriter Output
------------------------------
[ReportAfterSuite] Kubernetes e2e JUnit report
  test/e2e/framework/test_context.go:529
[ReportAfterSuite] TOP-LEVEL
  test/e2e/framework/test_context.go:529
------------------------------
[ReportAfterSuite] PASSED [0.186 seconds]
[ReportAfterSuite] Kubernetes e2e JUnit report
  test/e2e/framework/test_context.go:529
Begin Captured GinkgoWriter Output >>
[ReportAfterSuite] TOP-LEVEL
  test/e2e/framework/test_context.go:529
<< End Captured GinkgoWriter Output
------------------------------

Ran 23 of 7069 Specs in 726.417 seconds
SUCCESS! -- 23 Passed | 0 Failed | 0 Pending | 7046 Skipped
PASS

Ginkgo ran 1 suite in 12m6.919683694s
Test Suite Passed