I0202 23:49:09.062304 16 e2e.go:116] Starting e2e run "b2e28416-3bfe-4118-930d-967a72c80932" on Ginkgo node 1
Feb 2 23:49:09.076: INFO: Enabling in-tree volume drivers
Running Suite: Kubernetes e2e suite - /usr/local/bin
====================================================
Random Seed: 1675381748 - will randomize all specs
Will run 21 of 7066 specs
------------------------------
[SynchronizedBeforeSuite] test/e2e/e2e.go:76
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76
{"msg":"Test Suite starting","completed":0,"skipped":0,"failed":0}
Feb 2 23:49:09.230: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Feb 2 23:49:09.232: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 2 23:49:09.253: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 2 23:49:09.275: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 2 23:49:09.275: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 2 23:49:09.275: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 2 23:49:09.281: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Feb 2 23:49:09.281: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 2 23:49:09.281: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 2 23:49:09.281: INFO: e2e test version: v1.25.6
Feb 2 23:49:09.282: INFO: kube-apiserver version: v1.25.2
[SynchronizedBeforeSuite] TOP-LEVEL test/e2e/e2e.go:76
Feb 2 23:49:09.282: INFO: >>> kubeConfig: /home/xtesting/.kube/config
Feb 2 23:49:09.286: INFO: Cluster IP family: ipv4
------------------------------
[SynchronizedBeforeSuite] PASSED [0.056 seconds]
[SynchronizedBeforeSuite] test/e2e/e2e.go:76
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 02/02/23 23:49:09.354
Feb 2 23:49:09.355: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:49:09.356
STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:49:09.366
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:49:09.37
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92
Feb 2 23:49:09.384: INFO: Waiting up to 1m0s for all nodes to be ready
Feb 2 23:50:09.410: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125
STEP: Create pods that use 4/5 of node resources. 02/02/23 23:50:09.413
Feb 2 23:50:09.436: INFO: Created pod: pod0-0-sched-preemption-low-priority
Feb 2 23:50:09.442: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Feb 2 23:50:09.458: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Feb 2 23:50:09.462: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled. 02/02/23 23:50:09.462
Feb 2 23:50:09.462: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-4624" to be "running"
Feb 2 23:50:09.465: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627427ms
Feb 2 23:50:11.470: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007616986s
Feb 2 23:50:13.471: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008474094s
Feb 2 23:50:15.469: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007181411s
Feb 2 23:50:17.468: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 8.006214855s
Feb 2 23:50:17.469: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running"
Feb 2 23:50:17.469: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-4624" to be "running"
Feb 2 23:50:17.472: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.978768ms
Feb 2 23:50:17.472: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running"
Feb 2 23:50:17.472: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-4624" to be "running"
Feb 2 23:50:17.474: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.561552ms
Feb 2 23:50:19.478: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.006756876s
Feb 2 23:50:19.478: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running"
Feb 2 23:50:19.478: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-4624" to be "running"
Feb 2 23:50:19.481: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.772175ms
Feb 2 23:50:19.481: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running"
STEP: Run a high priority pod that has same requirements as that of lower priority pod 02/02/23 23:50:19.481
Feb 2 23:50:19.486: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-4624" to be "running"
Feb 2 23:50:19.490: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.016003ms
Feb 2 23:50:21.494: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007514456s
Feb 2 23:50:23.494: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.00787119s
Feb 2 23:50:23.494: INFO: Pod "preemptor-pod" satisfied condition "running"
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187
Feb 2 23:50:23.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4624" for this suite. 02/02/23 23:50:23.514
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","completed":1,"skipped":478,"failed":0}
------------------------------
• [SLOW TEST] [74.208 seconds]
[sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance] test/e2e/scheduling/preemption.go:125
------------------------------
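For readers following along, the spec above exercises pod priority and preemption: low- and medium-priority pods are sized to fill roughly 4/5 of each node, and a higher-priority "preemptor-pod" with the same requirements is then created so the scheduler must evict a lower-priority victim. A minimal sketch of the two kinds of objects involved follows; the PriorityClass name and value and the CPU request are illustrative assumptions, not values taken from this run.

# Sketch only: PriorityClass name/value and the CPU request are assumed for illustration.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority            # assumed name; the e2e framework creates its own classes
value: 1000                      # higher value than the low/medium classes it should preempt
globalDefault: false
description: "Pods at this priority may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor-pod            # pod name used by the spec
spec:
  priorityClassName: high-priority
  containers:
  - name: preemptor-pod
    image: k8s.gcr.io/pause:3.8  # pause image already present on the nodes in this run
    resources:
      requests:
        cpu: "500m"              # assumed; the spec derives requests from node capacity at runtime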
[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 02/02/23 23:50:23.566
Feb 2 23:50:23.566: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 02/02/23 23:50:23.568
STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:23.578
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:23.582
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822
STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:50:23.601
STEP: Check that daemon pods launch on every node of the cluster.
02/02/23 23:50:23.605 Feb 2 23:50:23.609: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:23.612: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:23.612: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:24.617: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:24.621: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:24.621: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:25.617: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:25.621: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:50:25.621: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: listing all DeamonSets 02/02/23 23:50:25.625 STEP: DeleteCollection of the DaemonSets 02/02/23 23:50:25.628 STEP: Verify that ReplicaSets have been deleted 02/02/23 23:50:25.634 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 Feb 2 23:50:25.647: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1431968"},"items":null} Feb 2 23:50:25.651: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1431968"},"items":[{"metadata":{"name":"daemon-set-srjzq","generateName":"daemon-set-","namespace":"daemonsets-7169","uid":"8cb2c292-01cd-4f47-82db-4cea59cd030b","resourceVersion":"1431962","creationTimestamp":"2023-02-02T23:50:23Z","labels":{"controller-revision-hash":"858775dd56","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"51139ea3-933b-452b-8a28-15a75b17f73a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51139ea3-933b-452b-8a28-15a75b17f73a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type"
:{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-4c7cf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-4c7cf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v125-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v125-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"}],"hostIP":"172.20.0.13","podIP":"10.244.1.107","podIPs":[{"ip":"10.244.1.107"}],"startTime":"2023-02-02T23:50:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-02-02T23:50:24Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://31de3f1e389ec9d9edf68d7fbf940cee73d2373ac3279c6f3993bc77d50725cc","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-tfm72","generateName":"daemon-set-","namespace":"daemonsets-7169","uid":"21cf8659-467b-49d9-b40a-18df2b1e9687","resourceVersion":"1431964","creationTimestamp":"2023-02-02T23:50:23Z","labels":{"controller-revision-hash":"858775dd56","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"51139ea3-933b-452b-8a28-15a75b17f73a","controlle
r":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51139ea3-933b-452b-8a28-15a75b17f73a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-djp97","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-djp97","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v125-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v125-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastP
robeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"}],"hostIP":"172.20.0.10","podIP":"10.244.2.28","podIPs":[{"ip":"10.244.2.28"}],"startTime":"2023-02-02T23:50:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-02-02T23:50:24Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://e28d777134f3660a62d4620dd6acacebcb8ae0b052fc48c5fa3c508771a7e787","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:50:25.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7169" for this suite. 02/02/23 23:50:25.666 {"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","completed":2,"skipped":511,"failed":0} ------------------------------ • [2.105 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:50:23.566 Feb 2 23:50:23.566: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:50:23.568 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:23.578 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:23.582 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should list and delete a collection of DaemonSets [Conformance] test/e2e/apps/daemon_set.go:822 STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:50:23.601 STEP: Check that daemon pods launch on every node of the cluster. 
02/02/23 23:50:23.605 Feb 2 23:50:23.609: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:23.612: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:23.612: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:24.617: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:24.621: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:24.621: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:25.617: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:25.621: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:50:25.621: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: listing all DeamonSets 02/02/23 23:50:25.625 STEP: DeleteCollection of the DaemonSets 02/02/23 23:50:25.628 STEP: Verify that ReplicaSets have been deleted 02/02/23 23:50:25.634 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 Feb 2 23:50:25.647: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1431968"},"items":null} Feb 2 23:50:25.651: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1431968"},"items":[{"metadata":{"name":"daemon-set-srjzq","generateName":"daemon-set-","namespace":"daemonsets-7169","uid":"8cb2c292-01cd-4f47-82db-4cea59cd030b","resourceVersion":"1431962","creationTimestamp":"2023-02-02T23:50:23Z","labels":{"controller-revision-hash":"858775dd56","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"51139ea3-933b-452b-8a28-15a75b17f73a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51139ea3-933b-452b-8a28-15a75b17f73a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type"
:{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.107\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-4c7cf","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-4c7cf","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v125-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v125-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"}],"hostIP":"172.20.0.13","podIP":"10.244.1.107","podIPs":[{"ip":"10.244.1.107"}],"startTime":"2023-02-02T23:50:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-02-02T23:50:24Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://31de3f1e389ec9d9edf68d7fbf940cee73d2373ac3279c6f3993bc77d50725cc","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-tfm72","generateName":"daemon-set-","namespace":"daemonsets-7169","uid":"21cf8659-467b-49d9-b40a-18df2b1e9687","resourceVersion":"1431964","creationTimestamp":"2023-02-02T23:50:23Z","labels":{"controller-revision-hash":"858775dd56","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"51139ea3-933b-452b-8a28-15a75b17f73a","controlle
r":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51139ea3-933b-452b-8a28-15a75b17f73a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-02T23:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-djp97","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-djp97","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v125-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v125-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastP
robeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:25Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-02-02T23:50:23Z"}],"hostIP":"172.20.0.10","podIP":"10.244.2.28","podIPs":[{"ip":"10.244.2.28"}],"startTime":"2023-02-02T23:50:23Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-02-02T23:50:24Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://e28d777134f3660a62d4620dd6acacebcb8ae0b052fc48c5fa3c508771a7e787","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:50:25.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7169" for this suite. 02/02/23 23:50:25.666 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:326 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:50:25.694 Feb 2 23:50:25.694: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/02/23 23:50:25.696 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:25.706 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:25.71 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 2 23:50:25.715: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:50:25.723: INFO: Waiting for terminating namespaces to be deleted... 
Feb 2 23:50:25.726: INFO: Logging pods the apiserver thinks is on node v125-worker before test
Feb 2 23:50:25.732: INFO: daemon-set-tfm72 from daemonsets-7169 started at 2023-02-02 23:50:23 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container app ready: true, restart count 0
Feb 2 23:50:25.732: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container loopdev ready: true, restart count 0
Feb 2 23:50:25.732: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container kindnet-cni ready: true, restart count 0
Feb 2 23:50:25.732: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container kube-proxy ready: true, restart count 0
Feb 2 23:50:25.732: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-4624 started at 2023-02-02 23:50:15 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0
Feb 2 23:50:25.732: INFO: preemptor-pod from sched-preemption-4624 started at 2023-02-02 23:50:21 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.732: INFO: Container preemptor-pod ready: true, restart count 0
Feb 2 23:50:25.732: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test
Feb 2 23:50:25.739: INFO: daemon-set-srjzq from daemonsets-7169 started at 2023-02-02 23:50:23 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container app ready: true, restart count 0
Feb 2 23:50:25.739: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container loopdev ready: true, restart count 0
Feb 2 23:50:25.739: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container kindnet-cni ready: true, restart count 0
Feb 2 23:50:25.739: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container kube-proxy ready: true, restart count 0
Feb 2 23:50:25.739: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-4624 started at 2023-02-02 23:50:17 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0
Feb 2 23:50:25.739: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-4624 started at 2023-02-02 23:50:17 +0000 UTC (1 container statuses recorded)
Feb 2 23:50:25.739: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:326
STEP: verifying the node has the label node v125-worker 02/02/23 23:50:25.759
STEP: verifying the node has the label node v125-worker2 02/02/23 23:50:25.773
Feb 2 23:50:25.784: INFO: Pod daemon-set-srjzq requesting resource cpu=0m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod daemon-set-tfm72 requesting resource cpu=0m on Node v125-worker
Feb 2 23:50:25.784: INFO: Pod create-loop-devs-d5nrm requesting resource cpu=0m on Node v125-worker
Feb 2 23:50:25.784: INFO: Pod create-loop-devs-tlwgp requesting resource cpu=0m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod kindnet-h8fbr requesting resource cpu=100m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod kindnet-xhfn8 requesting resource cpu=100m on Node v125-worker
Feb 2 23:50:25.784: INFO: Pod kube-proxy-bvl9x requesting resource cpu=0m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod kube-proxy-pxrcg requesting resource cpu=0m on Node v125-worker
Feb 2 23:50:25.784: INFO: Pod pod0-1-sched-preemption-medium-priority requesting resource cpu=0m on Node v125-worker
Feb 2 23:50:25.784: INFO: Pod pod1-0-sched-preemption-medium-priority requesting resource cpu=0m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod pod1-1-sched-preemption-medium-priority requesting resource cpu=0m on Node v125-worker2
Feb 2 23:50:25.784: INFO: Pod preemptor-pod requesting resource cpu=0m on Node v125-worker
STEP: Starting Pods to consume most of the cluster CPU. 02/02/23 23:50:25.784
Feb 2 23:50:25.785: INFO: Creating a pod which consumes cpu=61530m on Node v125-worker
Feb 2 23:50:25.792: INFO: Creating a pod which consumes cpu=61530m on Node v125-worker2
Feb 2 23:50:25.796: INFO: Waiting up to 5m0s for pod "filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d" in namespace "sched-pred-9191" to be "running"
Feb 2 23:50:25.798: INFO: Pod "filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38614ms
Feb 2 23:50:27.804: INFO: Pod "filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d": Phase="Running", Reason="", readiness=true. Elapsed: 2.00765169s
Feb 2 23:50:27.804: INFO: Pod "filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d" satisfied condition "running"
Feb 2 23:50:27.804: INFO: Waiting up to 5m0s for pod "filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3" in namespace "sched-pred-9191" to be "running"
Feb 2 23:50:27.807: INFO: Pod "filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3": Phase="Running", Reason="", readiness=true. Elapsed: 3.28608ms
Feb 2 23:50:27.807: INFO: Pod "filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3" satisfied condition "running"
STEP: Creating another pod that requires unavailable amount of CPU. 02/02/23 23:50:27.807
STEP: Considering event: Type = [Normal], Name = [filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3.174026e4ddf55b93], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9191/filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3 to v125-worker2] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3.174026e50214ccbb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3.174026e502cb79b5], Reason = [Created], Message = [Created container filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3.174026e51117001d], Reason = [Started], Message = [Started container filler-pod-8bf1d962-73b7-43d3-8a2f-3109cb1ad0e3] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d.174026e4ddb3f6cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9191/filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d to v125-worker] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d.174026e5028e25b7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.8" already present on machine] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d.174026e50337bd2b], Reason = [Created], Message = [Created container filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d] 02/02/23 23:50:27.812
STEP: Considering event: Type = [Normal], Name = [filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d.174026e510a8209f], Reason = [Started], Message = [Started container filler-pod-d1c64683-a300-4bdc-8044-c184b735e82d] 02/02/23 23:50:27.813
STEP: Considering event: Type = [Warning], Name = [additional-pod.174026e556516742], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.] 02/02/23 23:50:27.823
STEP: removing the label node off the node v125-worker 02/02/23 23:50:28.823
STEP: verifying the node doesn't have the label node 02/02/23 23:50:28.832
STEP: removing the label node off the node v125-worker2 02/02/23 23:50:28.835
STEP: verifying the node doesn't have the label node 02/02/23 23:50:28.846
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187
Feb 2 23:50:28.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9191" for this suite. 02/02/23 23:50:28.85
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","completed":3,"skipped":887,"failed":0}
------------------------------
• [3.160 seconds]
[sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance] test/e2e/scheduling/predicates.go:326
------------------------------
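This spec saturates each worker node with a "filler" pod sized to the node's remaining allocatable CPU (cpu=61530m in this run), then submits one more pod whose request cannot fit anywhere and asserts the FailedScheduling event quoted above. A sketch of such an unschedulable pod follows; the request value is an assumption for illustration, since the spec computes the real value from node capacity at runtime.

# Sketch only: the CPU request below is illustrative; the spec derives the real value.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod           # name used by the spec for the pod that must not schedule
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.8
    resources:
      requests:
        cpu: "1"                 # more CPU than any node has left once the filler pods are running
      limits:
        cpu: "1"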
[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:861
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client 02/02/23 23:50:28.859
Feb 2 23:50:28.859: INFO: >>> kubeConfig: /home/xtesting/.kube/config
STEP: Building a namespace api object, basename daemonsets 02/02/23 23:50:28.86
STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:28.867
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:28.87
[BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145
[It] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:861
STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:50:28.886
STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:50:28.89
Feb 2 23:50:28.892: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 2 23:50:28.894: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Feb 2 23:50:28.894: INFO: Node v125-worker is running 0 daemon pod, expected 1
Feb 2 23:50:29.899: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 2 23:50:29.903: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Feb 2 23:50:29.903: INFO: Node v125-worker is running 0 daemon pod, expected 1
Feb 2 23:50:30.899: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 2 23:50:30.902: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Feb 2 23:50:30.902: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Getting /status 02/02/23 23:50:30.905
Feb 2 23:50:30.909: INFO: Daemon Set daemon-set has Conditions: []
STEP: updating the DaemonSet Status 02/02/23 23:50:30.909
Feb 2 23:50:30.917: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the daemon set status to be updated 02/02/23 23:50:30.917
Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: ADDED
Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED
Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED
Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED
Feb 2 23:50:30.920: INFO: Found daemon set daemon-set in namespace daemonsets-6903 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Feb 2 23:50:30.920: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status 02/02/23 23:50:30.92
STEP: watching for the daemon set status to be
patched 02/02/23 23:50:30.928 Feb 2 23:50:30.929: INFO: Observed &DaemonSet event: ADDED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed daemon set daemon-set in namespace daemonsets-6903 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Found daemon set daemon-set in namespace daemonsets-6903 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] Feb 2 23:50:30.930: INFO: Daemon set daemon-set has a patched status [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:50:30.933 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6903, will wait for the garbage collector to delete the pods 02/02/23 23:50:30.933 Feb 2 23:50:30.992: INFO: Deleting DaemonSet.extensions daemon-set took: 4.600131ms Feb 2 23:50:31.092: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.336973ms Feb 2 23:50:33.395: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:33.395: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:50:33.398: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1432125"},"items":null} Feb 2 23:50:33.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1432125"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:50:33.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6903" for this suite. 02/02/23 23:50:33.413 {"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","completed":4,"skipped":956,"failed":0} ------------------------------ • [4.559 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:861 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:50:28.859 Feb 2 23:50:28.859: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:50:28.86 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:28.867 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:28.87 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should verify changes to a daemon set status [Conformance] test/e2e/apps/daemon_set.go:861 STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:50:28.886 STEP: Check that daemon pods launch on every node of the cluster. 
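The DaemonSet status test logged above reads /status, adds a "StatusUpdate" condition through the status subresource, watches for that change, and then applies a merge patch that leaves a single "StatusPatched" condition. A hedged client-go sketch of the update-then-patch flow follows; the condition values come from the log, while the helper name and error handling are assumptions rather than the framework's own code.

```go
package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// updateAndPatchStatus mirrors the two steps in the log: UpdateStatus adds a
// "StatusUpdate" condition, then a merge patch on the status subresource
// replaces the conditions with a single "StatusPatched" entry.
func updateAndPatchStatus(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	ds, err := client.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}

	ds.Status.Conditions = append(ds.Status.Conditions, appsv1.DaemonSetCondition{
		Type:    "StatusUpdate",
		Status:  "True",
		Reason:  "E2E",
		Message: "Set from e2e test",
	})
	if _, err := client.AppsV1().DaemonSets(ns).UpdateStatus(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Merge patch against the "status" subresource, as watched for above.
	patch := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	_, err = client.AppsV1().DaemonSets(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}
```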
02/02/23 23:50:28.89 Feb 2 23:50:28.892: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:28.894: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:28.894: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:29.899: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:29.903: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:29.903: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:50:30.899: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:50:30.902: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:50:30.902: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Getting /status 02/02/23 23:50:30.905 Feb 2 23:50:30.909: INFO: Daemon Set daemon-set has Conditions: [] STEP: updating the DaemonSet Status 02/02/23 23:50:30.909 Feb 2 23:50:30.917: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the daemon set status to be updated 02/02/23 23:50:30.917 Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: ADDED Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.920: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.920: INFO: Found daemon set daemon-set in namespace daemonsets-6903 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Feb 2 23:50:30.920: INFO: Daemon set daemon-set has an updated status STEP: patching the DaemonSet Status 02/02/23 23:50:30.92 STEP: watching for the daemon set status to be patched 02/02/23 23:50:30.928 Feb 2 23:50:30.929: INFO: Observed &DaemonSet event: ADDED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Observed daemon set daemon-set in namespace daemonsets-6903 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Feb 2 23:50:30.930: INFO: Observed &DaemonSet event: MODIFIED Feb 2 23:50:30.930: INFO: Found daemon set daemon-set in namespace daemonsets-6903 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] Feb 2 23:50:30.930: INFO: Daemon set daemon-set has a patched status [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:50:30.933 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6903, will wait for the garbage collector to delete the pods 02/02/23 
23:50:30.933 Feb 2 23:50:30.992: INFO: Deleting DaemonSet.extensions daemon-set took: 4.600131ms Feb 2 23:50:31.092: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.336973ms Feb 2 23:50:33.395: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:50:33.395: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:50:33.398: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1432125"},"items":null} Feb 2 23:50:33.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1432125"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:50:33.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6903" for this suite. 02/02/23 23:50:33.413 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:50:33.458 Feb 2 23:50:33.458: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper 02/02/23 23:50:33.46 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:33.47 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:33.474 [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 STEP: Creating 50 configmaps 02/02/23 23:50:33.478 STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:50:33.715 Feb 2 23:50:33.817: INFO: Pod name wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:50:33.817 Feb 2 23:50:33.818: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:33.865: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 47.89301ms Feb 2 23:50:35.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053310633s Feb 2 23:50:37.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053276464s Feb 2 23:50:39.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.053647477s Feb 2 23:50:41.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053795556s Feb 2 23:50:43.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053162716s Feb 2 23:50:45.870: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.052294496s Feb 2 23:50:47.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Running", Reason="", readiness=true. Elapsed: 14.053672829s Feb 2 23:50:47.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z" satisfied condition "running" Feb 2 23:50:47.871: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:47.875: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059028ms Feb 2 23:50:49.880: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll": Phase="Running", Reason="", readiness=true. Elapsed: 2.008401087s Feb 2 23:50:49.880: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll" satisfied condition "running" Feb 2 23:50:49.880: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.886: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr": Phase="Running", Reason="", readiness=true. Elapsed: 6.022722ms Feb 2 23:50:49.886: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr" satisfied condition "running" Feb 2 23:50:49.886: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.892: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h": Phase="Running", Reason="", readiness=true. Elapsed: 5.682775ms Feb 2 23:50:49.892: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h" satisfied condition "running" Feb 2 23:50:49.892: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.895: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.517456ms Feb 2 23:50:49.895: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:50:49.895 Feb 2 23:50:49.955: INFO: Deleting ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 took: 4.948999ms Feb 2 23:50:50.055: INFO: Terminating ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 pods took: 100.838992ms STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:50:52.361 Feb 2 23:50:52.375: INFO: Pod name wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827: Found 0 pods out of 5 Feb 2 23:50:57.384: INFO: Pod name wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:50:57.384 Feb 2 23:50:57.385: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:57.389: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798488ms Feb 2 23:50:59.393: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008368212s Feb 2 23:51:01.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009488164s Feb 2 23:51:03.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009562569s Feb 2 23:51:05.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00883786s Feb 2 23:51:07.395: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Running", Reason="", readiness=true. Elapsed: 10.01004645s Feb 2 23:51:07.395: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226" satisfied condition "running" Feb 2 23:51:07.395: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.399: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs": Phase="Running", Reason="", readiness=true. Elapsed: 3.732803ms Feb 2 23:51:07.399: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs" satisfied condition "running" Feb 2 23:51:07.399: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.402: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t": Phase="Running", Reason="", readiness=true. Elapsed: 3.739784ms Feb 2 23:51:07.402: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t" satisfied condition "running" Feb 2 23:51:07.402: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.406: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.765381ms Feb 2 23:51:07.406: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46" satisfied condition "running" Feb 2 23:51:07.406: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.410: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf": Phase="Running", Reason="", readiness=true. Elapsed: 3.891252ms Feb 2 23:51:07.410: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:51:07.41 Feb 2 23:51:07.470: INFO: Deleting ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 took: 5.49581ms Feb 2 23:51:07.571: INFO: Terminating ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 pods took: 101.136648ms STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:51:09.877 Feb 2 23:51:09.894: INFO: Pod name wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22: Found 0 pods out of 5 Feb 2 23:51:14.903: INFO: Pod name wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:51:14.903 Feb 2 23:51:14.903: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:14.907: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016976ms Feb 2 23:51:16.912: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009069656s Feb 2 23:51:18.913: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009607195s Feb 2 23:51:20.914: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01050858s Feb 2 23:51:22.914: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010563369s Feb 2 23:51:24.912: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.009086478s Feb 2 23:51:24.913: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2" satisfied condition "running" Feb 2 23:51:24.913: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.917: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq": Phase="Running", Reason="", readiness=true. Elapsed: 4.093231ms Feb 2 23:51:24.917: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq" satisfied condition "running" Feb 2 23:51:24.917: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.921: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.112148ms Feb 2 23:51:24.921: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf" satisfied condition "running" Feb 2 23:51:24.921: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.925: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs": Phase="Running", Reason="", readiness=true. Elapsed: 3.781139ms Feb 2 23:51:24.925: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs" satisfied condition "running" Feb 2 23:51:24.925: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.929: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9": Phase="Running", Reason="", readiness=true. Elapsed: 3.75261ms Feb 2 23:51:24.929: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:51:24.929 Feb 2 23:51:24.989: INFO: Deleting ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 took: 5.589065ms Feb 2 23:51:25.090: INFO: Terminating ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 pods took: 101.029374ms STEP: Cleaning up the configMaps 02/02/23 23:51:27.491 [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:187 Feb 2 23:51:27.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5150" for this suite. 02/02/23 23:51:27.7 {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","completed":5,"skipped":1525,"failed":0} ------------------------------ • [SLOW TEST] [54.246 seconds] [sig-storage] EmptyDir wrapper volumes test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:50:33.458 Feb 2 23:50:33.458: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper 02/02/23 23:50:33.46 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:50:33.47 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:50:33.474 [It] should not cause race condition when used for configmaps [Serial] [Conformance] test/e2e/storage/empty_dir_wrapper.go:189 STEP: Creating 50 configmaps 02/02/23 23:50:33.478 STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:50:33.715 Feb 2 23:50:33.817: INFO: Pod name wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:50:33.817 Feb 2 23:50:33.818: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:33.865: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 47.89301ms Feb 2 23:50:35.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053310633s Feb 2 23:50:37.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053276464s Feb 2 23:50:39.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053647477s Feb 2 23:50:41.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053795556s Feb 2 23:50:43.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.053162716s Feb 2 23:50:45.870: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.052294496s Feb 2 23:50:47.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z": Phase="Running", Reason="", readiness=true. Elapsed: 14.053672829s Feb 2 23:50:47.871: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-7cc8z" satisfied condition "running" Feb 2 23:50:47.871: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:47.875: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059028ms Feb 2 23:50:49.880: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll": Phase="Running", Reason="", readiness=true. Elapsed: 2.008401087s Feb 2 23:50:49.880: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-l9nll" satisfied condition "running" Feb 2 23:50:49.880: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.886: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr": Phase="Running", Reason="", readiness=true. Elapsed: 6.022722ms Feb 2 23:50:49.886: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-p2xbr" satisfied condition "running" Feb 2 23:50:49.886: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.892: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h": Phase="Running", Reason="", readiness=true. Elapsed: 5.682775ms Feb 2 23:50:49.892: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-qjc8h" satisfied condition "running" Feb 2 23:50:49.892: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:49.895: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.517456ms Feb 2 23:50:49.895: INFO: Pod "wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906-v64vw" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:50:49.895 Feb 2 23:50:49.955: INFO: Deleting ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 took: 4.948999ms Feb 2 23:50:50.055: INFO: Terminating ReplicationController wrapped-volume-race-0b520b89-0250-4bb9-b96d-a5f71f929906 pods took: 100.838992ms STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:50:52.361 Feb 2 23:50:52.375: INFO: Pod name wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827: Found 0 pods out of 5 Feb 2 23:50:57.384: INFO: Pod name wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:50:57.384 Feb 2 23:50:57.385: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:50:57.389: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798488ms Feb 2 23:50:59.393: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008368212s Feb 2 23:51:01.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009488164s Feb 2 23:51:03.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 6.009562569s Feb 2 23:51:05.394: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00883786s Feb 2 23:51:07.395: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226": Phase="Running", Reason="", readiness=true. Elapsed: 10.01004645s Feb 2 23:51:07.395: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-4t226" satisfied condition "running" Feb 2 23:51:07.395: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.399: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs": Phase="Running", Reason="", readiness=true. Elapsed: 3.732803ms Feb 2 23:51:07.399: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-dmqvs" satisfied condition "running" Feb 2 23:51:07.399: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.402: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t": Phase="Running", Reason="", readiness=true. Elapsed: 3.739784ms Feb 2 23:51:07.402: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-qdj7t" satisfied condition "running" Feb 2 23:51:07.402: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.406: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.765381ms Feb 2 23:51:07.406: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-wfz46" satisfied condition "running" Feb 2 23:51:07.406: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:07.410: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf": Phase="Running", Reason="", readiness=true. Elapsed: 3.891252ms Feb 2 23:51:07.410: INFO: Pod "wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827-xmpdf" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:51:07.41 Feb 2 23:51:07.470: INFO: Deleting ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 took: 5.49581ms Feb 2 23:51:07.571: INFO: Terminating ReplicationController wrapped-volume-race-60fa6865-4b19-4850-9a9b-2b1b05552827 pods took: 101.136648ms STEP: Creating RC which spawns configmap-volume pods 02/02/23 23:51:09.877 Feb 2 23:51:09.894: INFO: Pod name wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22: Found 0 pods out of 5 Feb 2 23:51:14.903: INFO: Pod name wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22: Found 5 pods out of 5 STEP: Ensuring each pod is running 02/02/23 23:51:14.903 Feb 2 23:51:14.903: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:14.907: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016976ms Feb 2 23:51:16.912: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009069656s Feb 2 23:51:18.913: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009607195s Feb 2 23:51:20.914: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01050858s Feb 2 23:51:22.914: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.010563369s Feb 2 23:51:24.912: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2": Phase="Running", Reason="", readiness=true. Elapsed: 10.009086478s Feb 2 23:51:24.913: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-4hjg2" satisfied condition "running" Feb 2 23:51:24.913: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.917: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq": Phase="Running", Reason="", readiness=true. Elapsed: 4.093231ms Feb 2 23:51:24.917: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-5wfdq" satisfied condition "running" Feb 2 23:51:24.917: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.921: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.112148ms Feb 2 23:51:24.921: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-9wsvf" satisfied condition "running" Feb 2 23:51:24.921: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.925: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs": Phase="Running", Reason="", readiness=true. Elapsed: 3.781139ms Feb 2 23:51:24.925: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-j9cgs" satisfied condition "running" Feb 2 23:51:24.925: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9" in namespace "emptydir-wrapper-5150" to be "running" Feb 2 23:51:24.929: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9": Phase="Running", Reason="", readiness=true. Elapsed: 3.75261ms Feb 2 23:51:24.929: INFO: Pod "wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22-nwnp9" satisfied condition "running" STEP: deleting ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 in namespace emptydir-wrapper-5150, will wait for the garbage collector to delete the pods 02/02/23 23:51:24.929 Feb 2 23:51:24.989: INFO: Deleting ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 took: 5.589065ms Feb 2 23:51:25.090: INFO: Terminating ReplicationController wrapped-volume-race-f182e649-3889-432e-98f4-5c0476b5ce22 pods took: 101.029374ms STEP: Cleaning up the configMaps 02/02/23 23:51:27.491 [AfterEach] [sig-storage] EmptyDir wrapper volumes test/e2e/framework/framework.go:187 Feb 2 23:51:27.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5150" for this suite. 
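The EmptyDir wrapper-volume test repeated above creates 50 ConfigMaps and then replication controllers whose pods mount many ConfigMap-backed volumes at once, deleting and recreating them to check that concurrent volume setup does not race. A rough sketch of the kind of pod spec involved is below; it assumes the ConfigMaps already exist, and the names used here are illustrative rather than the generated wrapped-volume-race-* names from the run.

```go
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapRacePod builds a pod that mounts several ConfigMap-backed volumes
// side by side, which is the situation the wrapper-volume race test stresses.
func configMapRacePod(ns string, configMapNames []string) *corev1.Pod {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i, cm := range configMapNames {
		volName := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      volName,
			MountPath: fmt.Sprintf("/etc/config-%d", i),
		})
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-example", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/pause:3.8",
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
}
```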
02/02/23 23:51:27.7 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:373 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:27.755 Feb 2 23:51:27.756: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:51:27.757 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:27.767 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:27.771 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:373 Feb 2 23:51:27.790: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:51:27.795 Feb 2 23:51:27.798: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:27.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:27.801: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:28.805: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:28.809: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:28.809: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:29.806: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:29.810: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:51:29.810: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Update daemon pods image. 02/02/23 23:51:29.823 STEP: Check that daemon pods images are updated. 02/02/23 23:51:29.834 Feb 2 23:51:29.838: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:29.838: INFO: Wrong image for pod: daemon-set-vl8z4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. 
Feb 2 23:51:29.842: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:30.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:30.850: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:31.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:31.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:32.846: INFO: Pod daemon-set-kjngj is not available Feb 2 23:51:32.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:32.852: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:33.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:34.847: INFO: Pod daemon-set-mnn79 is not available Feb 2 23:51:34.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
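The RollingUpdate test above switches the DaemonSet's pod template from the httpd image to the agnhost image and then waits for the controller to replace every daemon pod, which is why the log alternates between "Wrong image for pod" and "Pod ... is not available" lines. A minimal sketch of triggering that rollout with client-go follows, using a conflict-retrying update; the namespace and images are taken from the log, everything else is an assumption.

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// rollDaemonSetImage updates the first container's image in the DaemonSet's
// pod template; with the default RollingUpdate strategy the controller then
// deletes and recreates the daemon pods node by node, as seen in the log.
func rollDaemonSetImage(ctx context.Context, client kubernetes.Interface, ns, name, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := client.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		_, err = client.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}

// Example call matching this run:
//   rollDaemonSetImage(ctx, client, "daemonsets-6025", "daemon-set",
//       "k8s.gcr.io/e2e-test-images/agnhost:2.40")
```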
02/02/23 23:51:34.851 Feb 2 23:51:34.855: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:34.858: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:34.858: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:35.863: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:35.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:51:35.867: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:51:35.883 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6025, will wait for the garbage collector to delete the pods 02/02/23 23:51:35.883 Feb 2 23:51:35.942: INFO: Deleting DaemonSet.extensions daemon-set took: 4.677349ms Feb 2 23:51:36.042: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.451206ms Feb 2 23:51:38.646: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:38.646: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:51:38.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433172"},"items":null} Feb 2 23:51:38.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433172"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:38.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6025" for this suite. 02/02/23 23:51:38.667 {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","completed":6,"skipped":2247,"failed":0} ------------------------------ • [SLOW TEST] [10.917 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:373 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:27.755 Feb 2 23:51:27.756: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:51:27.757 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:27.767 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:27.771 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] test/e2e/apps/daemon_set.go:373 Feb 2 23:51:27.790: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
02/02/23 23:51:27.795 Feb 2 23:51:27.798: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:27.801: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:27.801: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:28.805: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:28.809: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:28.809: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:29.806: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:29.810: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:51:29.810: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Update daemon pods image. 02/02/23 23:51:29.823 STEP: Check that daemon pods images are updated. 02/02/23 23:51:29.834 Feb 2 23:51:29.838: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:29.838: INFO: Wrong image for pod: daemon-set-vl8z4. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:29.842: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:30.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:30.850: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:31.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:31.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:32.846: INFO: Pod daemon-set-kjngj is not available Feb 2 23:51:32.846: INFO: Wrong image for pod: daemon-set-ltkk6. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.40, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2. Feb 2 23:51:32.852: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:33.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:34.847: INFO: Pod daemon-set-mnn79 is not available Feb 2 23:51:34.851: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
02/02/23 23:51:34.851 Feb 2 23:51:34.855: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:34.858: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:34.858: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:51:35.863: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:51:35.867: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:51:35.867: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:51:35.883 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6025, will wait for the garbage collector to delete the pods 02/02/23 23:51:35.883 Feb 2 23:51:35.942: INFO: Deleting DaemonSet.extensions daemon-set took: 4.677349ms Feb 2 23:51:36.042: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.451206ms Feb 2 23:51:38.646: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:38.646: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:51:38.649: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433172"},"items":null} Feb 2 23:51:38.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433172"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:38.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6025" for this suite. 
02/02/23 23:51:38.667 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:242 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:38.705 Feb 2 23:51:38.705: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:51:38.706 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:38.717 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:38.721 [It] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:242 STEP: Creating a test namespace 02/02/23 23:51:38.725 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:38.735 STEP: Creating a pod in the namespace 02/02/23 23:51:38.739 STEP: Waiting for the pod to have running status 02/02/23 23:51:38.745 Feb 2 23:51:38.746: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-3355" to be "running" Feb 2 23:51:38.748: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870047ms Feb 2 23:51:40.754: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008049905s Feb 2 23:51:40.754: INFO: Pod "test-pod" satisfied condition "running" STEP: Deleting the namespace 02/02/23 23:51:40.754 STEP: Waiting for the namespace to be removed. 02/02/23 23:51:40.759 STEP: Recreating the namespace 02/02/23 23:51:51.764 STEP: Verifying there are no pods in the namespace 02/02/23 23:51:51.775 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:51.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5746" for this suite. 02/02/23 23:51:51.782 STEP: Destroying namespace "nsdeletetest-3355" for this suite. 02/02/23 23:51:51.786 Feb 2 23:51:51.789: INFO: Namespace nsdeletetest-3355 was already deleted STEP: Destroying namespace "nsdeletetest-8040" for this suite. 
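The Namespaces test above creates a throwaway namespace with a running pod, deletes the namespace, waits (roughly ten seconds in this run) for it and its pods to disappear, and then recreates it to confirm the pod is gone. A hedged sketch of the delete-and-wait step is below; the poll interval and timeout are assumptions, not the framework's values.

```go
package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait deletes the namespace and polls until the API server
// reports it gone; the namespace controller removes the pods inside it, which
// is what the conformance test verifies after recreating the namespace.
func deleteNamespaceAndWait(ctx context.Context, client kubernetes.Interface, ns string) error {
	if err := client.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace (and everything in it) is gone
		}
		return false, err
	})
}
```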
02/02/23 23:51:51.789 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","completed":7,"skipped":2728,"failed":0} ------------------------------ • [SLOW TEST] [13.088 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:242 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:38.705 Feb 2 23:51:38.705: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:51:38.706 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:38.717 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:38.721 [It] should ensure that all pods are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:242 STEP: Creating a test namespace 02/02/23 23:51:38.725 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:38.735 STEP: Creating a pod in the namespace 02/02/23 23:51:38.739 STEP: Waiting for the pod to have running status 02/02/23 23:51:38.745 Feb 2 23:51:38.746: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-3355" to be "running" Feb 2 23:51:38.748: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870047ms Feb 2 23:51:40.754: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008049905s Feb 2 23:51:40.754: INFO: Pod "test-pod" satisfied condition "running" STEP: Deleting the namespace 02/02/23 23:51:40.754 STEP: Waiting for the namespace to be removed. 02/02/23 23:51:40.759 STEP: Recreating the namespace 02/02/23 23:51:51.764 STEP: Verifying there are no pods in the namespace 02/02/23 23:51:51.775 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:51.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5746" for this suite. 02/02/23 23:51:51.782 STEP: Destroying namespace "nsdeletetest-3355" for this suite. 02/02/23 23:51:51.786 Feb 2 23:51:51.789: INFO: Namespace nsdeletetest-3355 was already deleted STEP: Destroying namespace "nsdeletetest-8040" for this suite. 
02/02/23 23:51:51.789 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:51.808 Feb 2 23:51:51.808: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/02/23 23:51:51.809 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:51.82 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:51.823 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 2 23:51:51.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:51:51.835: INFO: Waiting for terminating namespaces to be deleted... Feb 2 23:51:51.837: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 2 23:51:51.843: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:51:51.843: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:51:51.843: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:51:51.843: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 2 23:51:51.849: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:51:51.849: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:51:51.849: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438 STEP: Trying to schedule Pod with nonempty NodeSelector. 02/02/23 23:51:51.849 STEP: Considering event: Type = [Warning], Name = [restricted-pod.174026f8e7f09038], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 02/02/23 23:51:51.87 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:52.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6780" for this suite. 
02/02/23 23:51:52.875 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","completed":8,"skipped":2955,"failed":0} ------------------------------ • [1.072 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:51.808 Feb 2 23:51:51.808: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/02/23 23:51:51.809 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:51.82 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:51.823 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 2 23:51:51.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:51:51.835: INFO: Waiting for terminating namespaces to be deleted... Feb 2 23:51:51.837: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 2 23:51:51.843: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:51:51.843: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:51:51.843: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.843: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:51:51.843: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 2 23:51:51.849: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:51:51.849: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:51:51.849: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:51:51.849: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/scheduling/predicates.go:438 STEP: Trying to schedule Pod with nonempty NodeSelector. 02/02/23 23:51:51.849 STEP: Considering event: Type = [Warning], Name = [restricted-pod.174026f8e7f09038], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 3 node(s) didn't match Pod's node affinity/selector. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.] 
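The FailedScheduling event above is the expected outcome: a pod whose nodeSelector names a label that no node carries cannot be placed, and preemption cannot help because evicting pods would not change node labels. A minimal client-go sketch of such a pod follows; the label key/value and the pause image are made up (the conformance test generates a random pair), and clientset construction is assumed.

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createUnschedulablePod submits a pod whose nodeSelector matches no node,
    // so the scheduler emits a FailedScheduling event and the pod stays Pending.
    func createUnschedulablePod(ctx context.Context, c kubernetes.Interface, ns string) (*corev1.Pod, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // Hypothetical label pair; no node in the cluster carries it.
                NodeSelector: map[string]string{"example.com/nonexistent": "true"},
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.8",
                }},
            },
        }
        return c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    }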
02/02/23 23:51:51.87 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 2 23:51:52.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6780" for this suite. 02/02/23 23:51:52.875 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:193 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:52.885 Feb 2 23:51:52.885: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:51:52.887 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:52.897 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:52.9 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:193 Feb 2 23:51:52.918: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 02/02/23 23:51:52.923 Feb 2 23:51:52.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:52.926: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Change node label to blue, check that daemon pod is launched. 02/02/23 23:51:52.926 Feb 2 23:51:52.943: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:52.943: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:53.947: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:53.947: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:54.947: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:54.948: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set STEP: Update the node label to green, and wait for daemons to be unscheduled 02/02/23 23:51:54.95 Feb 2 23:51:54.965: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:54.965: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set Feb 2 23:51:55.968: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:55.968: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 02/02/23 23:51:55.968 Feb 2 23:51:55.975: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:55.975: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:56.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:56.979: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:57.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:57.979: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:58.980: INFO: Number of nodes with available pods 
controlled by daemonset daemon-set: 1 Feb 2 23:51:58.980: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:51:58.986 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4709, will wait for the garbage collector to delete the pods 02/02/23 23:51:58.986 Feb 2 23:51:59.045: INFO: Deleting DaemonSet.extensions daemon-set took: 4.684568ms Feb 2 23:51:59.146: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.543593ms Feb 2 23:52:01.650: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:01.650: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:52:01.652: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433339"},"items":null} Feb 2 23:52:01.655: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433339"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:01.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4709" for this suite. 02/02/23 23:52:01.678 {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","completed":9,"skipped":3033,"failed":0} ------------------------------ • [SLOW TEST] [8.797 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:193 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:51:52.885 Feb 2 23:51:52.885: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:51:52.887 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:51:52.897 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:51:52.9 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop complex daemon [Conformance] test/e2e/apps/daemon_set.go:193 Feb 2 23:51:52.918: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 02/02/23 23:51:52.923 Feb 2 23:51:52.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:52.926: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Change node label to blue, check that daemon pod is launched. 
02/02/23 23:51:52.926 Feb 2 23:51:52.943: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:52.943: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:53.947: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:53.947: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:54.947: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:54.948: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set STEP: Update the node label to green, and wait for daemons to be unscheduled 02/02/23 23:51:54.95 Feb 2 23:51:54.965: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:54.965: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set Feb 2 23:51:55.968: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:55.968: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 02/02/23 23:51:55.968 Feb 2 23:51:55.975: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:55.975: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:56.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:56.979: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:57.979: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:51:57.979: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:51:58.980: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:51:58.980: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:51:58.986 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4709, will wait for the garbage collector to delete the pods 02/02/23 23:51:58.986 Feb 2 23:51:59.045: INFO: Deleting DaemonSet.extensions daemon-set took: 4.684568ms Feb 2 23:51:59.146: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.543593ms Feb 2 23:52:01.650: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:01.650: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:52:01.652: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433339"},"items":null} Feb 2 23:52:01.655: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433339"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:01.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4709" for this suite. 
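The "complex daemon" spec above drives a DaemonSet whose pod template carries a node selector, then moves a node in and out of that selector by relabelling it. A rough sketch of the two halves is shown below, assuming a label key of "color" (the real test generates its own key) and the node name seen in this run (v125-worker2); it is not the suite's helper code.

    package sketch

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createSelectorDaemonSet creates a DaemonSet that only schedules onto nodes
    // labelled color=blue; with no such node it runs zero pods, matching the
    // "should not be running on any nodes" step above.
    func createSelectorDaemonSet(ctx context.Context, c kubernetes.Interface, ns string) error {
        labels := map[string]string{"app": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        NodeSelector: map[string]string{"color": "blue"},
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
                        }},
                    },
                },
            },
        }
        _, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
        return err
    }

    // relabelNode flips the color label on a node, which makes the DaemonSet
    // controller launch or evict its daemon pod on that node.
    func relabelNode(ctx context.Context, c kubernetes.Interface, nodeName, color string) error {
        node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        node.Labels["color"] = color
        _, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
        return err
    }

Relabelling from blue to green with relabelNode reproduces the "Update the node label to green, and wait for daemons to be unscheduled" transition logged above.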
02/02/23 23:52:01.678 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:01.697 Feb 2 23:52:01.697: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:52:01.699 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:01.708 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:01.712 [It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 STEP: Creating a test namespace 02/02/23 23:52:01.715 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:01.724 STEP: Creating a service in the namespace 02/02/23 23:52:01.727 STEP: Deleting the namespace 02/02/23 23:52:01.734 STEP: Waiting for the namespace to be removed. 02/02/23 23:52:01.739 STEP: Recreating the namespace 02/02/23 23:52:07.742 STEP: Verifying there is no service in the namespace 02/02/23 23:52:07.753 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:07.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9061" for this suite. 02/02/23 23:52:07.76 STEP: Destroying namespace "nsdeletetest-3192" for this suite. 02/02/23 23:52:07.764 Feb 2 23:52:07.767: INFO: Namespace nsdeletetest-3192 was already deleted STEP: Destroying namespace "nsdeletetest-1509" for this suite. 02/02/23 23:52:07.767 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","completed":10,"skipped":3269,"failed":0} ------------------------------ • [SLOW TEST] [6.074 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:01.697 Feb 2 23:52:01.697: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:52:01.699 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:01.708 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:01.712 [It] should ensure that all services are removed when a namespace is deleted [Conformance] test/e2e/apimachinery/namespace.go:250 STEP: Creating a test namespace 02/02/23 23:52:01.715 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:01.724 STEP: Creating a service in the namespace 02/02/23 23:52:01.727 STEP: Deleting the namespace 02/02/23 23:52:01.734 STEP: Waiting for the namespace to be removed. 
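The companion namespace spec above runs the same dance with a Service instead of a Pod: create a service, delete the namespace, recreate it, and confirm the service did not survive. A small sketch under the same assumptions (hypothetical service name and selector, clientset construction omitted):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // createTestService creates a plain ClusterIP service in ns; once the
    // namespace is deleted and recreated, listing services should come back empty.
    func createTestService(ctx context.Context, c kubernetes.Interface, ns string) error {
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"app": "test"},
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(80),
                }},
            },
        }
        _, err := c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
        return err
    }

    // verifyNoServices lists services in the recreated namespace and reports
    // whether any survived the deletion.
    func verifyNoServices(ctx context.Context, c kubernetes.Interface, ns string) (bool, error) {
        list, err := c.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        return len(list.Items) == 0, nil
    }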
02/02/23 23:52:01.739 STEP: Recreating the namespace 02/02/23 23:52:07.742 STEP: Verifying there is no service in the namespace 02/02/23 23:52:07.753 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:07.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9061" for this suite. 02/02/23 23:52:07.76 STEP: Destroying namespace "nsdeletetest-3192" for this suite. 02/02/23 23:52:07.764 Feb 2 23:52:07.767: INFO: Namespace nsdeletetest-3192 was already deleted STEP: Destroying namespace "nsdeletetest-1509" for this suite. 02/02/23 23:52:07.767 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:07.78 Feb 2 23:52:07.780: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:52:07.782 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:07.792 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:07.795 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431 Feb 2 23:52:07.817: INFO: Create a RollingUpdate DaemonSet Feb 2 23:52:07.822: INFO: Check that daemon pods launch on every node of the cluster Feb 2 23:52:07.826: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:07.828: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:07.828: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:52:08.833: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:08.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:08.837: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:52:09.833: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:09.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:52:09.837: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set Feb 2 23:52:09.837: INFO: Update the DaemonSet to trigger a rollout Feb 2 23:52:09.845: INFO: Updating DaemonSet daemon-set Feb 2 23:52:12.861: INFO: Roll back the DaemonSet before rollout is complete Feb 2 23:52:12.869: INFO: Updating DaemonSet daemon-set Feb 2 23:52:12.869: INFO: Make sure DaemonSet rollback is complete Feb 2 23:52:12.872: INFO: Wrong image for pod: daemon-set-q2nd9. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. 
Feb 2 23:52:12.872: INFO: Pod daemon-set-q2nd9 is not available Feb 2 23:52:12.876: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:13.884: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:14.882: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:15.880: INFO: Pod daemon-set-p9749 is not available Feb 2 23:52:15.884: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:52:15.89 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3323, will wait for the garbage collector to delete the pods 02/02/23 23:52:15.89 Feb 2 23:52:15.948: INFO: Deleting DaemonSet.extensions daemon-set took: 4.550896ms Feb 2 23:52:16.049: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.860938ms Feb 2 23:52:17.753: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:17.753: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:52:17.756: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433494"},"items":null} Feb 2 23:52:17.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433494"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:17.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3323" for this suite. 
02/02/23 23:52:17.774 {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","completed":11,"skipped":3351,"failed":0} ------------------------------ • [SLOW TEST] [9.998 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:07.78 Feb 2 23:52:07.780: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:52:07.782 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:07.792 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:07.795 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should rollback without unnecessary restarts [Conformance] test/e2e/apps/daemon_set.go:431 Feb 2 23:52:07.817: INFO: Create a RollingUpdate DaemonSet Feb 2 23:52:07.822: INFO: Check that daemon pods launch on every node of the cluster Feb 2 23:52:07.826: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:07.828: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:07.828: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:52:08.833: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:08.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:08.837: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:52:09.833: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:09.837: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:52:09.837: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set Feb 2 23:52:09.837: INFO: Update the DaemonSet to trigger a rollout Feb 2 23:52:09.845: INFO: Updating DaemonSet daemon-set Feb 2 23:52:12.861: INFO: Roll back the DaemonSet before rollout is complete Feb 2 23:52:12.869: INFO: Updating DaemonSet daemon-set Feb 2 23:52:12.869: INFO: Make sure DaemonSet rollback is complete Feb 2 23:52:12.872: INFO: Wrong image for pod: daemon-set-q2nd9. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. 
Feb 2 23:52:12.872: INFO: Pod daemon-set-q2nd9 is not available Feb 2 23:52:12.876: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:13.884: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:14.882: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:52:15.880: INFO: Pod daemon-set-p9749 is not available Feb 2 23:52:15.884: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:52:15.89 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3323, will wait for the garbage collector to delete the pods 02/02/23 23:52:15.89 Feb 2 23:52:15.948: INFO: Deleting DaemonSet.extensions daemon-set took: 4.550896ms Feb 2 23:52:16.049: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.860938ms Feb 2 23:52:17.753: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:52:17.753: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:52:17.756: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1433494"},"items":null} Feb 2 23:52:17.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433494"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:17.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3323" for this suite. 
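The rollback spec above creates a RollingUpdate DaemonSet, switches its image to the unpullable foo:non-existent to start a rollout, then restores the original image before the rollout can finish and checks that already-healthy pods were not restarted. A rough sketch of the update/rollback step, assuming a single-container DaemonSet and using a conflict-retry loop (the helper name is mine, not the suite's):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // setDaemonSetImage updates the (single) container image of a DaemonSet,
    // retrying on update conflicts. Setting an unpullable image such as
    // "foo:non-existent" starts a rollout that can never finish; setting the
    // original image back afterwards acts as the rollback.
    func setDaemonSetImage(ctx context.Context, c kubernetes.Interface, ns, name, image string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ds, err := c.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            ds.Spec.Template.Spec.Containers[0].Image = image
            _, err = c.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
            return err
        })
    }

Calling setDaemonSetImage first with "foo:non-existent" and then with k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 reproduces the pair of "Updating DaemonSet daemon-set" lines logged above; pods still running the original image stay untouched, which is what "without unnecessary restarts" asserts.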
02/02/23 23:52:17.774 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:17.794 Feb 2 23:52:17.794: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:52:17.796 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:17.807 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:17.811 [It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 STEP: Read namespace status 02/02/23 23:52:17.814 Feb 2 23:52:17.818: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} STEP: Patch namespace status 02/02/23 23:52:17.818 Feb 2 23:52:17.823: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} STEP: Update namespace status 02/02/23 23:52:17.824 Feb 2 23:52:17.832: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:17.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8982" for this suite. 
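The namespace-status steps just logged (read the status, patch it, then update it with a custom condition) can be approximated with a raw patch against the status subresource. This is a hedged sketch rather than the suite's own helper: the merge-patch type and the condition fields are assumptions chosen to match the StatusPatch condition printed above, and clientset construction is assumed.

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // patchNamespaceStatusCondition adds a custom condition to a namespace's
    // status subresource, mirroring the "Patch namespace status" step above.
    func patchNamespaceStatusCondition(ctx context.Context, c kubernetes.Interface, ns string) error {
        patch := []byte(`{"status":{"conditions":[{"type":"StatusPatch","status":"True","reason":"E2E","message":"Patched by an e2e test","lastTransitionTime":null}]}}`)
        // The final "status" argument targets the status subresource.
        _, err := c.CoreV1().Namespaces().Patch(ctx, ns, types.MergePatchType,
            patch, metav1.PatchOptions{}, "status")
        return err
    }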
02/02/23 23:52:17.837 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should apply changes to a namespace status [Conformance]","completed":12,"skipped":3466,"failed":0} ------------------------------ • [0.048 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:17.794 Feb 2 23:52:17.794: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:52:17.796 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:17.807 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:17.811 [It] should apply changes to a namespace status [Conformance] test/e2e/apimachinery/namespace.go:298 STEP: Read namespace status 02/02/23 23:52:17.814 Feb 2 23:52:17.818: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)} STEP: Patch namespace status 02/02/23 23:52:17.818 Feb 2 23:52:17.823: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"} STEP: Update namespace status 02/02/23 23:52:17.824 Feb 2 23:52:17.832: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"} [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:52:17.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8982" for this suite. 02/02/23 23:52:17.837 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:17.874 Feb 2 23:52:17.875: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:52:17.876 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:17.886 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:17.89 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:52:17.904: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 23:53:17.933: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:17.936 Feb 2 23:53:17.936: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption-path 02/02/23 23:53:17.937 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:17.947 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:17.951 [BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:496 STEP: Finding an available node 02/02/23 23:53:17.955 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/02/23 23:53:17.955 Feb 2 23:53:17.962: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-2202" to be "running" Feb 2 23:53:17.965: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602355ms Feb 2 23:53:19.969: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006784398s Feb 2 23:53:19.969: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/02/23 23:53:19.972 Feb 2 23:53:19.980: INFO: found a healthy node: v125-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543 Feb 2 23:53:28.046: INFO: pods created so far: [1 1 1] Feb 2 23:53:28.046: INFO: length of pods created so far: 3 Feb 2 23:53:30.055: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath test/e2e/framework/framework.go:187 Feb 2 23:53:37.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2202" for this suite. 02/02/23 23:53:37.063 [AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:470 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 2 23:53:37.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5366" for this suite. 02/02/23 23:53:37.1 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","completed":13,"skipped":3930,"failed":0} ------------------------------ • [SLOW TEST] [79.266 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PreemptionExecutionPath test/e2e/scheduling/preemption.go:458 runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:52:17.874 Feb 2 23:52:17.875: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:52:17.876 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:52:17.886 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:52:17.89 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:52:17.904: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 23:53:17.933: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:17.936 Feb 2 23:53:17.936: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption-path 02/02/23 23:53:17.937 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:17.947 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:17.951 [BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:496 STEP: Finding an available node 02/02/23 23:53:17.955 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/02/23 23:53:17.955 Feb 2 23:53:17.962: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-2202" to be "running" Feb 2 23:53:17.965: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.602355ms Feb 2 23:53:19.969: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.006784398s Feb 2 23:53:19.969: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/02/23 23:53:19.972 Feb 2 23:53:19.980: INFO: found a healthy node: v125-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/scheduling/preemption.go:543 Feb 2 23:53:28.046: INFO: pods created so far: [1 1 1] Feb 2 23:53:28.046: INFO: length of pods created so far: 3 Feb 2 23:53:30.055: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath test/e2e/framework/framework.go:187 Feb 2 23:53:37.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2202" for this suite. 02/02/23 23:53:37.063 [AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:470 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 2 23:53:37.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5366" for this suite. 
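For context, the PreemptionExecutionPath spec above builds PriorityClasses and ReplicaSets whose pods fill a node so that higher-priority pods preempt lower-priority ones. The sketch below shows only the two building blocks — a PriorityClass and a pod that requests memory under it. The names, the 2Gi request, and the pause image are assumptions; the real spec sizes its requests from the chosen node's capacity.

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        schedulingv1 "k8s.io/api/scheduling/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createPriorityClass registers a cluster-scoped PriorityClass; pods that
    // reference it via priorityClassName outrank (and can preempt) pods whose
    // class has a lower value when node resources run out.
    func createPriorityClass(ctx context.Context, c kubernetes.Interface, name string, value int32) error {
        pc := &schedulingv1.PriorityClass{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Value:      value,
        }
        _, err := c.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
        return err
    }

    // createPriorityPod creates a pod that requests a large slice of memory under
    // the given PriorityClass, the shape the preemption specs use to fill a node.
    func createPriorityPod(ctx context.Context, c kubernetes.Interface, ns, name, priorityClass string) error {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                PriorityClassName: priorityClass,
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.8",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("2Gi"),
                        },
                    },
                }},
            },
        }
        _, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
        return err
    }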
02/02/23 23:53:37.1 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] test/e2e/apimachinery/namespace.go:267 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:37.23 Feb 2 23:53:37.230: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:53:37.231 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:37.241 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:37.245 [It] should patch a Namespace [Conformance] test/e2e/apimachinery/namespace.go:267 STEP: creating a Namespace 02/02/23 23:53:37.248 STEP: patching the Namespace 02/02/23 23:53:37.258 STEP: get the Namespace and ensuring it has the label 02/02/23 23:53:37.262 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:53:37.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6068" for this suite. 02/02/23 23:53:37.269 STEP: Destroying namespace "nspatchtest-bb2c1b8d-5078-45a3-9426-1bc5f36a4f07-4999" for this suite. 
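The "patch a Namespace" steps above amount to a strategic-merge patch of the namespace's labels followed by a read-back. A compact sketch, with a hypothetical testLabel/testValue pair standing in for whatever label the suite applies:

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // labelNamespace applies a label to an existing namespace with a strategic
    // merge patch and then reads it back to confirm the label is present.
    func labelNamespace(ctx context.Context, c kubernetes.Interface, ns string) error {
        patch := []byte(`{"metadata":{"labels":{"testLabel":"testValue"}}}`)
        if _, err := c.CoreV1().Namespaces().Patch(ctx, ns, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            return err
        }
        got, err := c.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if got.Labels["testLabel"] != "testValue" {
            return fmt.Errorf("namespace %s is missing the patched label", ns)
        }
        return nil
    }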
02/02/23 23:53:37.273 {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","completed":14,"skipped":5582,"failed":0} ------------------------------ • [0.047 seconds] [sig-api-machinery] Namespaces [Serial] test/e2e/apimachinery/framework.go:23 should patch a Namespace [Conformance] test/e2e/apimachinery/namespace.go:267 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:37.23 Feb 2 23:53:37.230: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename namespaces 02/02/23 23:53:37.231 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:37.241 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:37.245 [It] should patch a Namespace [Conformance] test/e2e/apimachinery/namespace.go:267 STEP: creating a Namespace 02/02/23 23:53:37.248 STEP: patching the Namespace 02/02/23 23:53:37.258 STEP: get the Namespace and ensuring it has the label 02/02/23 23:53:37.262 [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 Feb 2 23:53:37.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6068" for this suite. 02/02/23 23:53:37.269 STEP: Destroying namespace "nspatchtest-bb2c1b8d-5078-45a3-9426-1bc5f36a4f07-4999" for this suite. 02/02/23 23:53:37.273 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:37.29 Feb 2 23:53:37.290: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/02/23 23:53:37.292 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:37.301 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:37.306 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 2 23:53:37.310: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:53:37.317: INFO: Waiting for terminating namespaces to be deleted... 
Feb 2 23:53:37.321: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 2 23:53:37.327: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:53:37.327: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:53:37.327: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:53:37.327: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 2 23:53:37.333: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:53:37.333: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:53:37.333: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:53:37.333: INFO: pod4 from sched-preemption-path-2202 started at 2023-02-02 23:53:29 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container pod4 ready: true, restart count 0 Feb 2 23:53:37.333: INFO: rs-pod3-hfw8v from sched-preemption-path-2202 started at 2023-02-02 23:53:26 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container pod3 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/02/23 23:53:37.334 Feb 2 23:53:37.341: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6156" to be "running" Feb 2 23:53:37.344: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741703ms Feb 2 23:53:39.348: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007142569s Feb 2 23:53:39.348: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/02/23 23:53:39.351 STEP: Trying to apply a random label on the found node. 02/02/23 23:53:39.362 STEP: verifying the node has the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 95 02/02/23 23:53:39.372 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 02/02/23 23:53:39.376 Feb 2 23:53:39.380: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-6156" to be "not pending" Feb 2 23:53:39.383: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798057ms Feb 2 23:53:41.387: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007170513s Feb 2 23:53:41.387: INFO: Pod "pod4" satisfied condition "not pending" STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.20.0.13 on the node which pod4 resides and expect not scheduled 02/02/23 23:53:41.387 Feb 2 23:53:41.393: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-6156" to be "not pending" Feb 2 23:53:41.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.160065ms Feb 2 23:53:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138814s Feb 2 23:53:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008025577s Feb 2 23:53:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008897399s Feb 2 23:53:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007547144s Feb 2 23:53:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008207014s Feb 2 23:53:53.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009355541s Feb 2 23:53:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.006973791s Feb 2 23:53:57.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009108195s Feb 2 23:53:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007999589s Feb 2 23:54:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008773109s Feb 2 23:54:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.008615538s Feb 2 23:54:05.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.007662619s Feb 2 23:54:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.008651214s Feb 2 23:54:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.00790854s Feb 2 23:54:11.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.008635356s Feb 2 23:54:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.008373448s Feb 2 23:54:15.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.008525464s Feb 2 23:54:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.008739046s Feb 2 23:54:19.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.008158046s Feb 2 23:54:21.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.007897108s Feb 2 23:54:23.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007802995s Feb 2 23:54:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.007846625s Feb 2 23:54:27.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.008844483s Feb 2 23:54:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.007265901s Feb 2 23:54:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008536826s Feb 2 23:54:33.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.0088305s Feb 2 23:54:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.008120635s Feb 2 23:54:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.00880193s Feb 2 23:54:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 58.007116716s Feb 2 23:54:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.008954775s Feb 2 23:54:43.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008973472s Feb 2 23:54:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.00837152s Feb 2 23:54:47.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009163624s Feb 2 23:54:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.007852934s Feb 2 23:54:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.00858231s Feb 2 23:54:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008867648s Feb 2 23:54:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.007824172s Feb 2 23:54:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008551128s Feb 2 23:54:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008288632s Feb 2 23:55:01.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009206838s Feb 2 23:55:03.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009306533s Feb 2 23:55:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.008419083s Feb 2 23:55:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.008295943s Feb 2 23:55:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.00779166s Feb 2 23:55:11.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008697935s Feb 2 23:55:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008516374s Feb 2 23:55:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.007588284s Feb 2 23:55:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008564874s Feb 2 23:55:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.007879477s Feb 2 23:55:21.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008631205s Feb 2 23:55:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.008278696s Feb 2 23:55:25.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008195984s Feb 2 23:55:27.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009324171s Feb 2 23:55:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.007303408s Feb 2 23:55:31.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.009143917s Feb 2 23:55:33.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.008984795s Feb 2 23:55:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.008240389s Feb 2 23:55:37.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.007394924s Feb 2 23:55:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007843146s Feb 2 23:55:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.008835576s Feb 2 23:55:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.008636189s Feb 2 23:55:45.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m4.007771379s Feb 2 23:55:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008481555s Feb 2 23:55:49.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.008041028s Feb 2 23:55:51.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.007899379s Feb 2 23:55:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.008723639s Feb 2 23:55:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.007637192s Feb 2 23:55:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.00828271s Feb 2 23:55:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.008031699s Feb 2 23:56:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008949612s Feb 2 23:56:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.008876385s Feb 2 23:56:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.008388636s Feb 2 23:56:07.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.009356741s Feb 2 23:56:09.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.008067677s Feb 2 23:56:11.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.008977903s Feb 2 23:56:13.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.00911019s Feb 2 23:56:15.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.008018144s Feb 2 23:56:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008856697s Feb 2 23:56:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.007143839s Feb 2 23:56:21.403: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.010478978s Feb 2 23:56:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.008450251s Feb 2 23:56:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.007605312s Feb 2 23:56:27.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.008231903s Feb 2 23:56:29.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.008534531s Feb 2 23:56:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008346129s Feb 2 23:56:33.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007928269s Feb 2 23:56:35.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.007896024s Feb 2 23:56:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.0083785s Feb 2 23:56:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007534327s Feb 2 23:56:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.008234421s Feb 2 23:56:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008730285s Feb 2 23:56:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.008110686s Feb 2 23:56:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.008781961s Feb 2 23:56:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.007046174s Feb 2 23:56:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m10.008469738s Feb 2 23:56:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.008186712s Feb 2 23:56:55.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.007983707s Feb 2 23:56:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.008921748s Feb 2 23:56:59.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.007034435s Feb 2 23:57:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.008529026s Feb 2 23:57:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.00816463s Feb 2 23:57:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.00807748s Feb 2 23:57:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.008789376s Feb 2 23:57:09.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.008127443s Feb 2 23:57:11.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.00911164s Feb 2 23:57:13.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.009011854s Feb 2 23:57:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.007708492s Feb 2 23:57:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.008715062s Feb 2 23:57:19.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008175422s Feb 2 23:57:21.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.009074597s Feb 2 23:57:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.00878066s Feb 2 23:57:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.00769912s Feb 2 23:57:27.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.007861221s Feb 2 23:57:29.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.008064628s Feb 2 23:57:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.00870637s Feb 2 23:57:33.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.007194915s Feb 2 23:57:35.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.007312327s Feb 2 23:57:37.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.009199301s Feb 2 23:57:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.007682671s Feb 2 23:57:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008183431s Feb 2 23:57:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.00869082s Feb 2 23:57:45.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.007443204s Feb 2 23:57:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008026147s Feb 2 23:57:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.007106969s Feb 2 23:57:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.008563701s Feb 2 23:57:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.00796591s Feb 2 23:57:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.007332786s Feb 2 23:57:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m16.00875569s Feb 2 23:57:59.399: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.006681665s Feb 2 23:58:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.008014124s Feb 2 23:58:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.008418128s Feb 2 23:58:05.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.007854386s Feb 2 23:58:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.008300651s Feb 2 23:58:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.007573042s Feb 2 23:58:11.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.00786431s Feb 2 23:58:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.008268073s Feb 2 23:58:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.007884434s Feb 2 23:58:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.00875697s Feb 2 23:58:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.007042605s Feb 2 23:58:21.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.008545648s Feb 2 23:58:23.403: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.010522359s Feb 2 23:58:25.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.008316979s Feb 2 23:58:27.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.009367308s Feb 2 23:58:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.007729246s Feb 2 23:58:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.008453914s Feb 2 23:58:33.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008340856s Feb 2 23:58:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.008412223s Feb 2 23:58:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.008526911s Feb 2 23:58:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.007924709s Feb 2 23:58:41.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.009020199s Feb 2 23:58:41.405: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.012126484s STEP: removing the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 off the node v125-worker2 02/02/23 23:58:41.405 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 02/02/23 23:58:41.419 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:41.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6156" for this suite. 
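The five minutes of "Pending" polling above is the expected outcome of this predicate: pod4 already holds 54322/TCP via hostIP 0.0.0.0, which the scheduler treats as covering every address on the node, so pod5's request for the same port and protocol on 172.20.0.13 can never be satisfied there. The sketch below is not part of the captured output; it shows roughly what the two conflicting hostPort declarations look like. The container name and image are illustrative assumptions, while the port, protocol, host IPs, and node-selector label are taken from the log.

```go
// Sketch only, not from the log: the two hostPort declarations that collide
// in the test above. Only the port, protocol, host IPs, and node label are
// grounded in the log; the container name and image are assumptions.
package hostportconflict

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podWithHostPort(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Both pods are pinned to the labelled node so the conflict is forced.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7": "95",
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",                                      // illustrative
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.40", // illustrative
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54322,
					HostPort:      54322,
					Protocol:      corev1.ProtocolTCP,
					HostIP:        hostIP,
				}},
			}},
		},
	}
}

// pod4: hostIP 0.0.0.0 claims 54322/TCP on every node address and schedules.
var pod4 = podWithHostPort("pod4", "0.0.0.0")

// pod5: the same port/protocol on 172.20.0.13 overlaps pod4's claim, so the
// scheduler leaves it Pending for the full 5m0s wait seen above.
var pod5 = podWithHostPort("pod5", "172.20.0.13")
```

Because the wait is bounded at 5m0s and the condition is "not pending", a pod5 that never leaves Pending is what the test expects, which is why the run below is reported as PASSED.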
02/02/23 23:58:41.426 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","completed":15,"skipped":5793,"failed":0} ------------------------------ • [SLOW TEST] [304.141 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:53:37.29 Feb 2 23:53:37.290: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/02/23 23:53:37.292 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:53:37.301 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:53:37.306 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 2 23:53:37.310: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 23:53:37.317: INFO: Waiting for terminating namespaces to be deleted... Feb 2 23:53:37.321: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 2 23:53:37.327: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:53:37.327: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:53:37.327: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.327: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:53:37.327: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 2 23:53:37.333: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container loopdev ready: true, restart count 0 Feb 2 23:53:37.333: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container kindnet-cni ready: true, restart count 0 Feb 2 23:53:37.333: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 23:53:37.333: INFO: pod4 from sched-preemption-path-2202 started at 2023-02-02 23:53:29 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container pod4 ready: true, restart count 0 Feb 2 23:53:37.333: INFO: rs-pod3-hfw8v from sched-preemption-path-2202 started at 2023-02-02 23:53:26 +0000 UTC (1 container statuses recorded) Feb 2 23:53:37.333: INFO: Container pod3 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] test/e2e/scheduling/predicates.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. 
02/02/23 23:53:37.334 Feb 2 23:53:37.341: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-6156" to be "running" Feb 2 23:53:37.344: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.741703ms Feb 2 23:53:39.348: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007142569s Feb 2 23:53:39.348: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/02/23 23:53:39.351 STEP: Trying to apply a random label on the found node. 02/02/23 23:53:39.362 STEP: verifying the node has the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 95 02/02/23 23:53:39.372 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 02/02/23 23:53:39.376 Feb 2 23:53:39.380: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-6156" to be "not pending" Feb 2 23:53:39.383: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798057ms Feb 2 23:53:41.387: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 2.007170513s Feb 2 23:53:41.387: INFO: Pod "pod4" satisfied condition "not pending" STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.20.0.13 on the node which pod4 resides and expect not scheduled 02/02/23 23:53:41.387 Feb 2 23:53:41.393: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-6156" to be "not pending" Feb 2 23:53:41.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.160065ms Feb 2 23:53:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008138814s Feb 2 23:53:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008025577s Feb 2 23:53:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008897399s Feb 2 23:53:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.007547144s Feb 2 23:53:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008207014s Feb 2 23:53:53.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.009355541s Feb 2 23:53:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.006973791s Feb 2 23:53:57.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009108195s Feb 2 23:53:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007999589s Feb 2 23:54:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008773109s Feb 2 23:54:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.008615538s Feb 2 23:54:05.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.007662619s Feb 2 23:54:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.008651214s Feb 2 23:54:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.00790854s Feb 2 23:54:11.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.008635356s Feb 2 23:54:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.008373448s Feb 2 23:54:15.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.008525464s Feb 2 23:54:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.008739046s Feb 2 23:54:19.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.008158046s Feb 2 23:54:21.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.007897108s Feb 2 23:54:23.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007802995s Feb 2 23:54:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.007846625s Feb 2 23:54:27.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.008844483s Feb 2 23:54:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.007265901s Feb 2 23:54:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008536826s Feb 2 23:54:33.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.0088305s Feb 2 23:54:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.008120635s Feb 2 23:54:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.00880193s Feb 2 23:54:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.007116716s Feb 2 23:54:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.008954775s Feb 2 23:54:43.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.008973472s Feb 2 23:54:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.00837152s Feb 2 23:54:47.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.009163624s Feb 2 23:54:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.007852934s Feb 2 23:54:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.00858231s Feb 2 23:54:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.008867648s Feb 2 23:54:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.007824172s Feb 2 23:54:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008551128s Feb 2 23:54:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008288632s Feb 2 23:55:01.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.009206838s Feb 2 23:55:03.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.009306533s Feb 2 23:55:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.008419083s Feb 2 23:55:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.008295943s Feb 2 23:55:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.00779166s Feb 2 23:55:11.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.008697935s Feb 2 23:55:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.008516374s Feb 2 23:55:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.007588284s Feb 2 23:55:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.008564874s Feb 2 23:55:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.007879477s Feb 2 23:55:21.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008631205s Feb 2 23:55:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m42.008278696s Feb 2 23:55:25.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008195984s Feb 2 23:55:27.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.009324171s Feb 2 23:55:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.007303408s Feb 2 23:55:31.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.009143917s Feb 2 23:55:33.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.008984795s Feb 2 23:55:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.008240389s Feb 2 23:55:37.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.007394924s Feb 2 23:55:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007843146s Feb 2 23:55:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.008835576s Feb 2 23:55:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.008636189s Feb 2 23:55:45.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.007771379s Feb 2 23:55:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008481555s Feb 2 23:55:49.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.008041028s Feb 2 23:55:51.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.007899379s Feb 2 23:55:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.008723639s Feb 2 23:55:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.007637192s Feb 2 23:55:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.00828271s Feb 2 23:55:59.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.008031699s Feb 2 23:56:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.008949612s Feb 2 23:56:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.008876385s Feb 2 23:56:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.008388636s Feb 2 23:56:07.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.009356741s Feb 2 23:56:09.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.008067677s Feb 2 23:56:11.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.008977903s Feb 2 23:56:13.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.00911019s Feb 2 23:56:15.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.008018144s Feb 2 23:56:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008856697s Feb 2 23:56:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.007143839s Feb 2 23:56:21.403: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.010478978s Feb 2 23:56:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.008450251s Feb 2 23:56:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.007605312s Feb 2 23:56:27.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m46.008231903s Feb 2 23:56:29.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.008534531s Feb 2 23:56:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008346129s Feb 2 23:56:33.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007928269s Feb 2 23:56:35.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.007896024s Feb 2 23:56:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.0083785s Feb 2 23:56:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007534327s Feb 2 23:56:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.008234421s Feb 2 23:56:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.008730285s Feb 2 23:56:45.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.008110686s Feb 2 23:56:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.008781961s Feb 2 23:56:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.007046174s Feb 2 23:56:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.008469738s Feb 2 23:56:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.008186712s Feb 2 23:56:55.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.007983707s Feb 2 23:56:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.008921748s Feb 2 23:56:59.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.007034435s Feb 2 23:57:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.008529026s Feb 2 23:57:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.00816463s Feb 2 23:57:05.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.00807748s Feb 2 23:57:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.008789376s Feb 2 23:57:09.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.008127443s Feb 2 23:57:11.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.00911164s Feb 2 23:57:13.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.009011854s Feb 2 23:57:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.007708492s Feb 2 23:57:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.008715062s Feb 2 23:57:19.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008175422s Feb 2 23:57:21.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.009074597s Feb 2 23:57:23.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.00878066s Feb 2 23:57:25.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.00769912s Feb 2 23:57:27.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.007861221s Feb 2 23:57:29.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.008064628s Feb 2 23:57:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.00870637s Feb 2 23:57:33.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m52.007194915s Feb 2 23:57:35.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.007312327s Feb 2 23:57:37.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.009199301s Feb 2 23:57:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.007682671s Feb 2 23:57:41.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008183431s Feb 2 23:57:43.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.00869082s Feb 2 23:57:45.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.007443204s Feb 2 23:57:47.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008026147s Feb 2 23:57:49.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.007106969s Feb 2 23:57:51.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.008563701s Feb 2 23:57:53.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.00796591s Feb 2 23:57:55.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.007332786s Feb 2 23:57:57.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.00875569s Feb 2 23:57:59.399: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.006681665s Feb 2 23:58:01.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.008014124s Feb 2 23:58:03.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.008418128s Feb 2 23:58:05.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.007854386s Feb 2 23:58:07.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.008300651s Feb 2 23:58:09.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.007573042s Feb 2 23:58:11.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.00786431s Feb 2 23:58:13.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.008268073s Feb 2 23:58:15.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.007884434s Feb 2 23:58:17.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.00875697s Feb 2 23:58:19.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.007042605s Feb 2 23:58:21.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.008545648s Feb 2 23:58:23.403: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.010522359s Feb 2 23:58:25.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.008316979s Feb 2 23:58:27.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.009367308s Feb 2 23:58:29.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.007729246s Feb 2 23:58:31.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.008453914s Feb 2 23:58:33.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.008340856s Feb 2 23:58:35.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.008412223s Feb 2 23:58:37.401: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.008526911s Feb 2 23:58:39.400: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.007924709s Feb 2 23:58:41.402: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.009020199s Feb 2 23:58:41.405: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.012126484s STEP: removing the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 off the node v125-worker2 02/02/23 23:58:41.405 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7d208e41-2273-4414-b1ea-01230bf2d8e7 02/02/23 23:58:41.419 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:41.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6156" for this suite. 02/02/23 23:58:41.426 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:41.439 Feb 2 23:58:41.439: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename controllerrevisions 02/02/23 23:58:41.44 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:41.451 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:41.455 [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93 [It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 STEP: Creating DaemonSet "e2e-dhhm9-daemon-set" 02/02/23 23:58:41.472 STEP: Check that daemon pods launch on every node of the cluster. 
02/02/23 23:58:41.477 Feb 2 23:58:41.480: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:41.483: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 0 Feb 2 23:58:41.483: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:42.487: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:42.490: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 2 Feb 2 23:58:42.490: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-dhhm9-daemon-set STEP: Confirm DaemonSet "e2e-dhhm9-daemon-set" successfully created with "daemonset-name=e2e-dhhm9-daemon-set" label 02/02/23 23:58:42.493 STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-dhhm9-daemon-set" 02/02/23 23:58:42.5 Feb 2 23:58:42.503: INFO: Located ControllerRevision: "e2e-dhhm9-daemon-set-58cd79b74d" STEP: Patching ControllerRevision "e2e-dhhm9-daemon-set-58cd79b74d" 02/02/23 23:58:42.506 Feb 2 23:58:42.512: INFO: e2e-dhhm9-daemon-set-58cd79b74d has been patched STEP: Create a new ControllerRevision 02/02/23 23:58:42.512 Feb 2 23:58:42.516: INFO: Created ControllerRevision: e2e-dhhm9-daemon-set-78d65f88f4 STEP: Confirm that there are two ControllerRevisions 02/02/23 23:58:42.516 Feb 2 23:58:42.516: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:42.520: INFO: Found 2 ControllerRevisions STEP: Deleting ControllerRevision "e2e-dhhm9-daemon-set-58cd79b74d" 02/02/23 23:58:42.52 STEP: Confirm that there is only one ControllerRevision 02/02/23 23:58:42.524 Feb 2 23:58:42.524: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:42.527: INFO: Found 1 ControllerRevisions STEP: Updating ControllerRevision "e2e-dhhm9-daemon-set-78d65f88f4" 02/02/23 23:58:42.53 Feb 2 23:58:42.538: INFO: e2e-dhhm9-daemon-set-78d65f88f4 has been updated STEP: Generate another ControllerRevision by patching the Daemonset 02/02/23 23:58:42.538 W0202 23:58:42.544791 16 warnings.go:70] unknown field "updateStrategy" STEP: Confirm that there are two ControllerRevisions 02/02/23 23:58:42.544 Feb 2 23:58:42.545: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.548: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.555: INFO: Found 2 ControllerRevisions STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-dhhm9-daemon-set-78d65f88f4=updated" 02/02/23 23:58:43.555 STEP: Confirm that there is only one ControllerRevision 02/02/23 23:58:43.561 Feb 2 23:58:43.561: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.564: INFO: Found 1 ControllerRevisions Feb 2 23:58:43.567: INFO: ControllerRevision "e2e-dhhm9-daemon-set-7957c6dbbc" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:58 STEP: Deleting DaemonSet "e2e-dhhm9-daemon-set" 02/02/23 23:58:43.57 STEP: deleting DaemonSet.extensions e2e-dhhm9-daemon-set in namespace controllerrevisions-7004, will wait for the garbage collector to delete the pods 02/02/23 23:58:43.57 Feb 2 23:58:43.629: INFO: Deleting DaemonSet.extensions e2e-dhhm9-daemon-set took: 4.738877ms Feb 2 23:58:43.730: INFO: Terminating 
DaemonSet.extensions e2e-dhhm9-daemon-set pods took: 100.772943ms Feb 2 23:58:45.534: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 0 Feb 2 23:58:45.534: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-dhhm9-daemon-set Feb 2 23:58:45.537: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434357"},"items":null} Feb 2 23:58:45.539: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434357"},"items":null} [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "controllerrevisions-7004" for this suite. 02/02/23 23:58:45.555 {"msg":"PASSED [sig-apps] ControllerRevision [Serial] should manage the lifecycle of a ControllerRevision [Conformance]","completed":16,"skipped":5888,"failed":0} ------------------------------ • [4.120 seconds] [sig-apps] ControllerRevision [Serial] test/e2e/apps/framework.go:23 should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:41.439 Feb 2 23:58:41.439: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename controllerrevisions 02/02/23 23:58:41.44 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:41.451 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:41.455 [BeforeEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:93 [It] should manage the lifecycle of a ControllerRevision [Conformance] test/e2e/apps/controller_revision.go:124 STEP: Creating DaemonSet "e2e-dhhm9-daemon-set" 02/02/23 23:58:41.472 STEP: Check that daemon pods launch on every node of the cluster. 
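The ControllerRevision steps in this test (list by the daemonset-name label, patch, create a second revision, delete one by name, update, then prune via DeleteCollection with a label selector) map directly onto the apps/v1 API. Below is a rough client-go sketch, not part of the captured output; clientset construction and error handling are simplified, and the kubeconfig path, namespace, and label selectors are the ones logged above.

```go
// Sketch only, not from the log: the ControllerRevision operations the test
// walks through, expressed with client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "controllerrevisions-7004" // namespace from the log

	// List revisions owned by the DaemonSet via its label.
	revs, err := cs.AppsV1().ControllerRevisions(ns).List(ctx,
		metav1.ListOptions{LabelSelector: "daemonset-name=e2e-dhhm9-daemon-set"})
	if err != nil {
		panic(err)
	}
	for _, r := range revs.Items {
		fmt.Printf("revision %s = %d\n", r.Name, r.Revision)
	}

	if len(revs.Items) > 0 {
		// Delete a single revision by name, as the test does after patching it.
		_ = cs.AppsV1().ControllerRevisions(ns).Delete(ctx, revs.Items[0].Name, metav1.DeleteOptions{})
	}

	// Prune any revision carrying the marker label in one call (DeleteCollection).
	_ = cs.AppsV1().ControllerRevisions(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "e2e-dhhm9-daemon-set-78d65f88f4=updated"})
}
```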
02/02/23 23:58:41.477 Feb 2 23:58:41.480: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:41.483: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 0 Feb 2 23:58:41.483: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:42.487: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:42.490: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 2 Feb 2 23:58:42.490: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset e2e-dhhm9-daemon-set STEP: Confirm DaemonSet "e2e-dhhm9-daemon-set" successfully created with "daemonset-name=e2e-dhhm9-daemon-set" label 02/02/23 23:58:42.493 STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-dhhm9-daemon-set" 02/02/23 23:58:42.5 Feb 2 23:58:42.503: INFO: Located ControllerRevision: "e2e-dhhm9-daemon-set-58cd79b74d" STEP: Patching ControllerRevision "e2e-dhhm9-daemon-set-58cd79b74d" 02/02/23 23:58:42.506 Feb 2 23:58:42.512: INFO: e2e-dhhm9-daemon-set-58cd79b74d has been patched STEP: Create a new ControllerRevision 02/02/23 23:58:42.512 Feb 2 23:58:42.516: INFO: Created ControllerRevision: e2e-dhhm9-daemon-set-78d65f88f4 STEP: Confirm that there are two ControllerRevisions 02/02/23 23:58:42.516 Feb 2 23:58:42.516: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:42.520: INFO: Found 2 ControllerRevisions STEP: Deleting ControllerRevision "e2e-dhhm9-daemon-set-58cd79b74d" 02/02/23 23:58:42.52 STEP: Confirm that there is only one ControllerRevision 02/02/23 23:58:42.524 Feb 2 23:58:42.524: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:42.527: INFO: Found 1 ControllerRevisions STEP: Updating ControllerRevision "e2e-dhhm9-daemon-set-78d65f88f4" 02/02/23 23:58:42.53 Feb 2 23:58:42.538: INFO: e2e-dhhm9-daemon-set-78d65f88f4 has been updated STEP: Generate another ControllerRevision by patching the Daemonset 02/02/23 23:58:42.538 W0202 23:58:42.544791 16 warnings.go:70] unknown field "updateStrategy" STEP: Confirm that there are two ControllerRevisions 02/02/23 23:58:42.544 Feb 2 23:58:42.545: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.548: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.555: INFO: Found 2 ControllerRevisions STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-dhhm9-daemon-set-78d65f88f4=updated" 02/02/23 23:58:43.555 STEP: Confirm that there is only one ControllerRevision 02/02/23 23:58:43.561 Feb 2 23:58:43.561: INFO: Requesting list of ControllerRevisions to confirm quantity Feb 2 23:58:43.564: INFO: Found 1 ControllerRevisions Feb 2 23:58:43.567: INFO: ControllerRevision "e2e-dhhm9-daemon-set-7957c6dbbc" has revision 3 [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/apps/controller_revision.go:58 STEP: Deleting DaemonSet "e2e-dhhm9-daemon-set" 02/02/23 23:58:43.57 STEP: deleting DaemonSet.extensions e2e-dhhm9-daemon-set in namespace controllerrevisions-7004, will wait for the garbage collector to delete the pods 02/02/23 23:58:43.57 Feb 2 23:58:43.629: INFO: Deleting DaemonSet.extensions e2e-dhhm9-daemon-set took: 4.738877ms Feb 2 23:58:43.730: INFO: Terminating 
DaemonSet.extensions e2e-dhhm9-daemon-set pods took: 100.772943ms Feb 2 23:58:45.534: INFO: Number of nodes with available pods controlled by daemonset e2e-dhhm9-daemon-set: 0 Feb 2 23:58:45.534: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-dhhm9-daemon-set Feb 2 23:58:45.537: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434357"},"items":null} Feb 2 23:58:45.539: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434357"},"items":null} [AfterEach] [sig-apps] ControllerRevision [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:45.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "controllerrevisions-7004" for this suite. 02/02/23 23:58:45.555 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:45.571 Feb 2 23:58:45.571: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:58:45.573 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:45.584 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:45.588 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165 STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:58:45.607 STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:58:45.612 Feb 2 23:58:45.616: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:45.619: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:45.619: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:46.625: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:46.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:46.628: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:47.624: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:47.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:47.628: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Stop a daemon pod, check that the daemon pod is revived. 
02/02/23 23:58:47.631 Feb 2 23:58:47.643: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:47.647: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:47.647: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:48.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:48.656: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:48.656: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:49.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:49.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:49.655: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:50.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:50.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:50.655: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:51.651: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:51.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:51.655: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:58:51.657 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8581, will wait for the garbage collector to delete the pods 02/02/23 23:58:51.657 Feb 2 23:58:51.715: INFO: Deleting DaemonSet.extensions daemon-set took: 3.971577ms Feb 2 23:58:51.815: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.35357ms Feb 2 23:58:54.519: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:54.519: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:58:54.522: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434465"},"items":null} Feb 2 23:58:54.525: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434465"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:54.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8581" for this suite. 
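A "simple DaemonSet" in this test is just a DaemonSet whose pod template carries no toleration for the control-plane taint, which is why the log skips v125-control-plane and expects exactly one pod on each of the two workers; deleting one daemon pod then lets the controller revive it. A rough sketch of such a manifest in Go types follows (not from the captured output; the image and labels are illustrative assumptions, while the name and namespace come from the log).

```go
// Sketch only, not from the log: roughly what creating the simple DaemonSet
// "daemon-set" amounts to. No toleration is set for the control-plane
// NoSchedule taint, so that node is skipped exactly as the log notes.
package simpledaemonset

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var daemonSet = appsv1.DaemonSet{
	ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "daemonsets-8581"},
	Spec: appsv1.DaemonSetSpec{
		Selector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"daemonset-name": "daemon-set"}, // label is an assumption
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"daemonset-name": "daemon-set"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "registry.k8s.io/e2e-test-images/httpd:2.4.38-4", // illustrative
				}},
			},
		},
	},
}
```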
02/02/23 23:58:54.539 {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","completed":17,"skipped":6036,"failed":0} ------------------------------ • [SLOW TEST] [8.972 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:45.571 Feb 2 23:58:45.571: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:58:45.573 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:45.584 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:45.588 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop simple daemon [Conformance] test/e2e/apps/daemon_set.go:165 STEP: Creating simple DaemonSet "daemon-set" 02/02/23 23:58:45.607 STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:58:45.612 Feb 2 23:58:45.616: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:45.619: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:45.619: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:46.625: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:46.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:46.628: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:47.624: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:47.628: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:47.628: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Stop a daemon pod, check that the daemon pod is revived. 
02/02/23 23:58:47.631 Feb 2 23:58:47.643: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:47.647: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:47.647: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:48.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:48.656: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:48.656: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:49.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:49.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:49.655: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:50.652: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:50.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:50.655: INFO: Node v125-worker2 is running 0 daemon pod, expected 1 Feb 2 23:58:51.651: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:51.655: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:51.655: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:58:51.657 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8581, will wait for the garbage collector to delete the pods 02/02/23 23:58:51.657 Feb 2 23:58:51.715: INFO: Deleting DaemonSet.extensions daemon-set took: 3.971577ms Feb 2 23:58:51.815: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.35357ms Feb 2 23:58:54.519: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:54.519: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:58:54.522: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434465"},"items":null} Feb 2 23:58:54.525: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434465"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:54.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8581" for this suite. 
02/02/23 23:58:54.539 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:54.546 Feb 2 23:58:54.546: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:58:54.548 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:54.561 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:54.565 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 STEP: Creating a simple DaemonSet "daemon-set" 02/02/23 23:58:54.583 STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:58:54.588 Feb 2 23:58:54.592: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:54.595: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:54.595: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:55.600: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:55.606: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:55.606: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 02/02/23 23:58:55.609 Feb 2 23:58:55.622: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:55.626: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:55.626: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:56.630: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:56.634: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:56.634: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 
02/02/23 23:58:56.634 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:58:56.639 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6525, will wait for the garbage collector to delete the pods 02/02/23 23:58:56.639 Feb 2 23:58:56.696: INFO: Deleting DaemonSet.extensions daemon-set took: 4.182591ms Feb 2 23:58:56.797: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.594822ms Feb 2 23:58:59.600: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:59.600: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:58:59.603: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434530"},"items":null} Feb 2 23:58:59.606: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434530"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6525" for this suite. 02/02/23 23:58:59.62 {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","completed":18,"skipped":6067,"failed":0} ------------------------------ • [SLOW TEST] [5.079 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:54.546 Feb 2 23:58:54.546: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename daemonsets 02/02/23 23:58:54.548 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:54.561 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:54.565 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/apps/daemon_set.go:293 STEP: Creating a simple DaemonSet "daemon-set" 02/02/23 23:58:54.583 STEP: Check that daemon pods launch on every node of the cluster. 02/02/23 23:58:54.588 Feb 2 23:58:54.592: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:54.595: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:54.595: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:55.600: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:55.606: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:55.606: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
02/02/23 23:58:55.609 Feb 2 23:58:55.622: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:55.626: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Feb 2 23:58:55.626: INFO: Node v125-worker is running 0 daemon pod, expected 1 Feb 2 23:58:56.630: INFO: DaemonSet pods can't tolerate node v125-control-plane with taints [{Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 2 23:58:56.634: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Feb 2 23:58:56.634: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. 02/02/23 23:58:56.634 [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" 02/02/23 23:58:56.639 STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6525, will wait for the garbage collector to delete the pods 02/02/23 23:58:56.639 Feb 2 23:58:56.696: INFO: Deleting DaemonSet.extensions daemon-set took: 4.182591ms Feb 2 23:58:56.797: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.594822ms Feb 2 23:58:59.600: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Feb 2 23:58:59.600: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Feb 2 23:58:59.603: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1434530"},"items":null} Feb 2 23:58:59.606: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1434530"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 Feb 2 23:58:59.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6525" for this suite. 02/02/23 23:58:59.62 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:59.629 Feb 2 23:58:59.629: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:58:59.63 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:59.641 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:59.644 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:58:59.659: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 23:59:59.684: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:59:59.687 Feb 2 23:59:59.687: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption-path 02/02/23 23:59:59.689 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:59:59.7 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:59:59.704 [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 Feb 2 23:59:59.717: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. Feb 2 23:59:59.721: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:187 Feb 2 23:59:59.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5145" for this suite. 02/02/23 23:59:59.74 [AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 2 23:59:59.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-764" for this suite. 02/02/23 23:59:59.756 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","completed":19,"skipped":6117,"failed":0} ------------------------------ • [SLOW TEST] [60.164 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PriorityClass endpoints test/e2e/scheduling/preemption.go:683 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:58:59.629 Feb 2 23:58:59.629: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:58:59.63 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:58:59.641 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:58:59.644 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:58:59.659: INFO: Waiting up to 1m0s for all nodes to be ready Feb 2 23:59:59.684: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:59:59.687 Feb 2 23:59:59.687: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption-path 02/02/23 23:59:59.689 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:59:59.7 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:59:59.704 [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/scheduling/preemption.go:733 Feb 2 23:59:59.717: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update. Feb 2 23:59:59.721: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:187 Feb 2 23:59:59.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-5145" for this suite. 02/02/23 23:59:59.74 [AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 2 23:59:59.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-764" for this suite. 02/02/23 23:59:59.756 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:59:59.806 Feb 2 23:59:59.806: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:59:59.808 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:59:59.818 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:59:59.821 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:59:59.833: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 00:00:59.857: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218 STEP: Create pods that use 4/5 of node resources. 02/03/23 00:00:59.861 Feb 3 00:00:59.881: INFO: Created pod: pod0-0-sched-preemption-low-priority Feb 3 00:00:59.885: INFO: Created pod: pod0-1-sched-preemption-medium-priority Feb 3 00:00:59.901: INFO: Created pod: pod1-0-sched-preemption-medium-priority Feb 3 00:00:59.905: INFO: Created pod: pod1-1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
02/03/23 00:00:59.905 Feb 3 00:00:59.906: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:00:59.908: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711483ms Feb 3 00:01:01.913: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007079971s Feb 3 00:01:03.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008094627s Feb 3 00:01:05.913: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007660891s Feb 3 00:01:07.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008314685s Feb 3 00:01:09.912: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006624832s Feb 3 00:01:11.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 12.00814547s Feb 3 00:01:11.914: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Feb 3 00:01:11.914: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.917: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.779945ms Feb 3 00:01:11.917: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Feb 3 00:01:11.917: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.920: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.988533ms Feb 3 00:01:11.920: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Feb 3 00:01:11.920: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.923: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.847892ms Feb 3 00:01:11.923: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" STEP: Run a critical pod that use same resources as that of a lower priority pod 02/03/23 00:01:11.923 Feb 3 00:01:11.932: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" Feb 3 00:01:11.935: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.104237ms Feb 3 00:01:13.940: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007671689s Feb 3 00:01:15.939: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.007392292s Feb 3 00:01:15.939: INFO: Pod "critical-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 3 00:01:15.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1434" for this suite. 
02/03/23 00:01:15.969 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","completed":20,"skipped":6329,"failed":0} ------------------------------ • [SLOW TEST] [76.201 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/02/23 23:59:59.806 Feb 2 23:59:59.806: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-preemption 02/02/23 23:59:59.808 STEP: Waiting for a default service account to be provisioned in namespace 02/02/23 23:59:59.818 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/02/23 23:59:59.821 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Feb 2 23:59:59.833: INFO: Waiting up to 1m0s for all nodes to be ready Feb 3 00:00:59.857: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] test/e2e/scheduling/preemption.go:218 STEP: Create pods that use 4/5 of node resources. 02/03/23 00:00:59.861 Feb 3 00:00:59.881: INFO: Created pod: pod0-0-sched-preemption-low-priority Feb 3 00:00:59.885: INFO: Created pod: pod0-1-sched-preemption-medium-priority Feb 3 00:00:59.901: INFO: Created pod: pod1-0-sched-preemption-medium-priority Feb 3 00:00:59.905: INFO: Created pod: pod1-1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 02/03/23 00:00:59.905 Feb 3 00:00:59.906: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:00:59.908: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711483ms Feb 3 00:01:01.913: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007079971s Feb 3 00:01:03.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008094627s Feb 3 00:01:05.913: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007660891s Feb 3 00:01:07.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008314685s Feb 3 00:01:09.912: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 10.006624832s Feb 3 00:01:11.914: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 12.00814547s Feb 3 00:01:11.914: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" Feb 3 00:01:11.914: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.917: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.779945ms Feb 3 00:01:11.917: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" Feb 3 00:01:11.917: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.920: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.988533ms Feb 3 00:01:11.920: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" Feb 3 00:01:11.920: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-1434" to be "running" Feb 3 00:01:11.923: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.847892ms Feb 3 00:01:11.923: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" STEP: Run a critical pod that use same resources as that of a lower priority pod 02/03/23 00:01:11.923 Feb 3 00:01:11.932: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" Feb 3 00:01:11.935: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.104237ms Feb 3 00:01:13.940: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007671689s Feb 3 00:01:15.939: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 4.007392292s Feb 3 00:01:15.939: INFO: Pod "critical-pod" satisfied condition "running" [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 Feb 3 00:01:15.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1434" for this suite. 02/03/23 00:01:15.969 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:01:16.02 Feb 3 00:01:16.020: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 00:01:16.022 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:01:16.032 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:01:16.036 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 00:01:16.040: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 00:01:16.048: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 00:01:16.051: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 00:01:16.057: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:01:16.057: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:01:16.057: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:01:16.057: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:09 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0 Feb 3 00:01:16.057: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 00:01:16.063: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:01:16.063: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:01:16.063: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:01:16.063: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:00 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0 Feb 3 00:01:16.063: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:00 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 00:01:16.063 Feb 3 00:01:16.070: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-595" to be "running" Feb 3 00:01:16.073: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930533ms Feb 3 00:01:18.078: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007966575s Feb 3 00:01:18.078: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 00:01:18.081 STEP: Trying to apply a random label on the found node. 02/03/23 00:01:18.089 STEP: verifying the node has the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c 42 02/03/23 00:01:18.1 STEP: Trying to relaunch the pod, now with labels. 02/03/23 00:01:18.103 Feb 3 00:01:18.107: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-595" to be "not pending" Feb 3 00:01:18.110: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.699008ms Feb 3 00:01:20.115: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.007270453s Feb 3 00:01:20.115: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c off the node v125-worker2 02/03/23 00:01:20.117 STEP: verifying the node doesn't have the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c 02/03/23 00:01:20.131 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 00:01:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-595" for this suite. 02/03/23 00:01:20.138 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","completed":21,"skipped":6513,"failed":0} ------------------------------ • [4.122 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 Begin Captured GinkgoWriter Output >> [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:186 STEP: Creating a kubernetes client 02/03/23 00:01:16.02 Feb 3 00:01:16.020: INFO: >>> kubeConfig: /home/xtesting/.kube/config STEP: Building a namespace api object, basename sched-pred 02/03/23 00:01:16.022 STEP: Waiting for a default service account to be provisioned in namespace 02/03/23 00:01:16.032 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 02/03/23 00:01:16.036 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Feb 3 00:01:16.040: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 3 00:01:16.048: INFO: Waiting for terminating namespaces to be deleted... 
Feb 3 00:01:16.051: INFO: Logging pods the apiserver thinks is on node v125-worker before test Feb 3 00:01:16.057: INFO: create-loop-devs-d5nrm from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:01:16.057: INFO: kindnet-xhfn8 from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:01:16.057: INFO: kube-proxy-pxrcg from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:01:16.057: INFO: pod0-1-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:09 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.057: INFO: Container pod0-1-sched-preemption-medium-priority ready: true, restart count 0 Feb 3 00:01:16.057: INFO: Logging pods the apiserver thinks is on node v125-worker2 before test Feb 3 00:01:16.063: INFO: create-loop-devs-tlwgp from kube-system started at 2023-01-23 08:55:01 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container loopdev ready: true, restart count 0 Feb 3 00:01:16.063: INFO: kindnet-h8fbr from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container kindnet-cni ready: true, restart count 0 Feb 3 00:01:16.063: INFO: kube-proxy-bvl9x from kube-system started at 2023-01-23 08:54:56 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container kube-proxy ready: true, restart count 0 Feb 3 00:01:16.063: INFO: pod1-0-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:00 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container pod1-0-sched-preemption-medium-priority ready: true, restart count 0 Feb 3 00:01:16.063: INFO: pod1-1-sched-preemption-medium-priority from sched-preemption-1434 started at 2023-02-03 00:01:00 +0000 UTC (1 container statuses recorded) Feb 3 00:01:16.063: INFO: Container pod1-1-sched-preemption-medium-priority ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] test/e2e/scheduling/predicates.go:461 STEP: Trying to launch a pod without a label to get a node which can launch it. 02/03/23 00:01:16.063 Feb 3 00:01:16.070: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-595" to be "running" Feb 3 00:01:16.073: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.930533ms Feb 3 00:01:18.078: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007966575s Feb 3 00:01:18.078: INFO: Pod "without-label" satisfied condition "running" STEP: Explicitly delete pod here to free the resource it takes. 02/03/23 00:01:18.081 STEP: Trying to apply a random label on the found node. 02/03/23 00:01:18.089 STEP: verifying the node has the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c 42 02/03/23 00:01:18.1 STEP: Trying to relaunch the pod, now with labels. 02/03/23 00:01:18.103 Feb 3 00:01:18.107: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-595" to be "not pending" Feb 3 00:01:18.110: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.699008ms Feb 3 00:01:20.115: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.007270453s Feb 3 00:01:20.115: INFO: Pod "with-labels" satisfied condition "not pending" STEP: removing the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c off the node v125-worker2 02/03/23 00:01:20.117 STEP: verifying the node doesn't have the label kubernetes.io/e2e-88a4438f-24c2-43e4-8911-96c81b9cb15c 02/03/23 00:01:20.131 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 Feb 3 00:01:20.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-595" for this suite. 02/03/23 00:01:20.138 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 << End Captured GinkgoWriter Output ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [SynchronizedAfterSuite] test/e2e/e2e.go:87 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 {"msg":"Test Suite completed","completed":21,"skipped":7045,"failed":0} Feb 3 00:01:20.178: INFO: Running AfterSuite actions on all nodes Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 Feb 3 00:01:20.178: INFO: Running AfterSuite actions on node 1 Feb 3 00:01:20.178: INFO: Skipping dumping logs from cluster ------------------------------ [SynchronizedAfterSuite] PASSED [0.000 seconds] [SynchronizedAfterSuite] test/e2e/e2e.go:87 Begin Captured GinkgoWriter Output >> [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 Feb 3 00:01:20.178: INFO: Running AfterSuite actions on all nodes Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func20.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func10.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2 Feb 3 00:01:20.178: INFO: Running Cleanup Action: 
k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3 [SynchronizedAfterSuite] TOP-LEVEL test/e2e/e2e.go:87 Feb 3 00:01:20.178: INFO: Running AfterSuite actions on node 1 Feb 3 00:01:20.178: INFO: Skipping dumping logs from cluster << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:146 [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:146 ------------------------------ [ReportAfterSuite] PASSED [0.000 seconds] [ReportAfterSuite] Kubernetes e2e suite report test/e2e/e2e_test.go:146 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/e2e_test.go:146 << End Captured GinkgoWriter Output ------------------------------ [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:559 [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:559 ------------------------------ [ReportAfterSuite] PASSED [0.116 seconds] [ReportAfterSuite] Kubernetes e2e JUnit report test/e2e/framework/test_context.go:559 Begin Captured GinkgoWriter Output >> [ReportAfterSuite] TOP-LEVEL test/e2e/framework/test_context.go:559 << End Captured GinkgoWriter Output ------------------------------ Ran 21 of 7066 Specs in 730.948 seconds SUCCESS! -- 21 Passed | 0 Failed | 0 Pending | 7045 Skipped PASS Ginkgo ran 1 suite in 12m11.358360533s Test Suite Passed
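As an illustrative aside (not part of the captured run): many entries above follow the same pattern of polling a pod until it reports phase Running, e.g. 'Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" ... to be "running"'. The Go sketch below reproduces that pattern with client-go under stated assumptions: it is not code from the e2e suite, the namespace and pod name are placeholders, and the kubeconfig path is simply reused from the log for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning polls the API server until the named pod reports phase
// Running or the timeout expires, mirroring the "Phase=Pending ... Elapsed"
// entries printed repeatedly in the log above.
func waitForPodRunning(clientset *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}

func main() {
	// Kubeconfig path taken from the log; namespace and pod name are hypothetical.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/xtesting/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodRunning(clientset, "default", "example-pod", 5*time.Minute); err != nil {
		panic(err)
	}
}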