I1006 14:35:10.574407 16 e2e.go:129] Starting e2e run "a0cbcd18-b993-467c-8b21-4d4a07078b06" on Ginkgo node 1
{"msg":"Test Suite starting","total":19,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1665066910 - Will randomize all specs
Will run 19 of 6973 specs

Oct 6 14:35:13.113: INFO: >>> kubeConfig: /root/.kube/config
Oct 6 14:35:13.116: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 6 14:35:13.143: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 6 14:35:13.174: INFO: 15 / 15 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 6 14:35:13.174: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 6 14:35:13.174: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 6 14:35:13.180: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Oct 6 14:35:13.180: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 6 14:35:13.180: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 6 14:35:13.180: INFO: e2e test version: v1.24.6
Oct 6 14:35:13.182: INFO: kube-apiserver version: v1.24.6
Oct 6 14:35:13.182: INFO: >>> kubeConfig: /root/.kube/config
Oct 6 14:35:13.187: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:35:13.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Oct 6 14:35:13.216: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
W1006 14:35:13.216363 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:652
Oct 6 14:35:13.238: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct 6 14:35:13.246: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:13.246: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Change node label to blue, check that daemon pod is launched.
Oct 6 14:35:13.263: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:13.263: INFO: Node v124-worker2 is running 0 daemon pod, expected 1
Oct 6 14:35:14.267: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:14.267: INFO: Node v124-worker2 is running 0 daemon pod, expected 1
Oct 6 14:35:15.268: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Oct 6 14:35:15.269: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct 6 14:35:15.288: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Oct 6 14:35:15.288: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set
Oct 6 14:35:16.292: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:16.292: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct 6 14:35:16.310: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:16.310: INFO: Node v124-worker2 is running 0 daemon pod, expected 1
Oct 6 14:35:17.315: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:17.315: INFO: Node v124-worker2 is running 0 daemon pod, expected 1
Oct 6 14:35:18.315: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:18.315: INFO: Node v124-worker2 is running 0 daemon pod, expected 1
Oct 6 14:35:19.315: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Oct 6 14:35:19.315: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9888, will wait for the garbage collector to delete the pods
Oct 6 14:35:19.381: INFO: Deleting DaemonSet.extensions daemon-set took: 5.103357ms
Oct 6 14:35:19.482: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.010621ms
Oct 6 14:35:21.786: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:35:21.786: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Oct 6 14:35:21.793: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"79676"},"items":null}
Oct 6 14:35:21.796: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"79676"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Oct 6 14:35:21.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9888" for this suite.
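The spec above drives DaemonSet scheduling entirely through node labels: the DaemonSet's node selector matches no node until one is relabeled, and flipping the label from blue to green evicts the daemon pod again. A minimal client-go sketch of the label flip and the convergence wait, assuming a clientset built from the suite's kubeconfig; the label key and polling bounds are illustrative, not the suite's own code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Relabel the node so the DaemonSet's nodeSelector starts matching it.
	node, err := cs.CoreV1().Nodes().Get(ctx, "v124-worker2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Labels["color"] = "blue" // hypothetical selector key
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Poll status until one daemon pod is available, mirroring the
	// "Number of running nodes ... available pods" loop in the log.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-9888").Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.NumberAvailable == 1, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pod launched on the labeled node")
}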
• [SLOW TEST:8.653 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":19,"completed":1,"skipped":21,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:35:21.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:188
Oct 6 14:35:34.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9280" for this suite.
STEP: Destroying namespace "nsdeletetest-6086" for this suite.
Oct 6 14:35:34.949: INFO: Namespace nsdeletetest-6086 was already deleted
STEP: Destroying namespace "nsdeletetest-2983" for this suite.
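The namespace test relies on cascading deletion: deleting the namespace moves it to phase Terminating, the pod inside is removed, and only then does the namespace object itself disappear. A sketch of the delete-and-wait step, reusing the clientset pattern from the sketch above (apierrors is "k8s.io/apimachinery/pkg/api/errors"; the timeout is illustrative):

func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	// Deletion is asynchronous: the Get keeps succeeding while contents
	// and finalizers drain, then turns NotFound once the namespace is gone.
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}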
• [SLOW TEST:13.107 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":19,"completed":2,"skipped":360,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:35:34.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:188
Oct 6 14:35:41.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6320" for this suite.
STEP: Destroying namespace "nsdeletetest-1778" for this suite.
Oct 6 14:35:41.035: INFO: Namespace nsdeletetest-1778 was already deleted
STEP: Destroying namespace "nsdeletetest-8352" for this suite.
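The services variant follows the same shape; the final assertion is just a List in the recreated namespace. A short sketch of that check, under the same clientset assumptions as above:

func verifyNoServices(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// A recreated namespace shares only its name with the deleted one,
	// so nothing created in the old namespace should be listed here.
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if n := len(svcs.Items); n != 0 {
		return fmt.Errorf("expected 0 services in %s, found %d", ns, n)
	}
	return nil
}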
• [SLOW TEST:6.083 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":19,"completed":3,"skipped":695,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:35:41.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:92
Oct 6 14:35:41.080: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 6 14:36:41.107: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
STEP: Create pods that use 4/5 of node resources.
Oct 6 14:36:41.143: INFO: Created pod: pod0-0-sched-preemption-low-priority
Oct 6 14:36:41.156: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Oct 6 14:36:41.172: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Oct 6 14:36:41.176: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:188
Oct 6 14:36:55.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9816" for this suite.
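The preemption spec fills 4/5 of each node with low- and medium-priority pods, then submits a pod whose priority class outranks them and whose resource requests only fit if a victim is evicted. A sketch of the two objects involved, assuming cs/ctx/ns as in the earlier sketches, with schedulingv1 "k8s.io/api/scheduling/v1", corev1 "k8s.io/api/core/v1", and "k8s.io/apimachinery/pkg/api/resource" imported; the class name, value, and request size are illustrative (the suite derives its requests from node capacity):

func runPreemptor(ctx context.Context, cs kubernetes.Interface, ns string) error {
	// A PriorityClass that outranks the victims' classes.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-high"},
		Value:      1000,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// A pod that requests the same resources as a lower-priority victim,
	// forcing the scheduler to preempt rather than co-schedule.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor", Namespace: ns},
		Spec: corev1.PodSpec{
			PriorityClassName: "sched-preemption-high",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.7",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}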
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/scheduling/preemption.go:80
• [SLOW TEST:74.231 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":19,"completed":4,"skipped":1064,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:36:55.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 6 14:36:55.332: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:55.335: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:36:55.335: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:36:56.340: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:56.344: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:36:56.345: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:36:57.340: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:57.345: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Oct 6 14:36:57.345: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: listing all DeamonSets
STEP: DeleteCollection of the DaemonSets
STEP: Verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
Oct 6 14:36:57.369: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"79999"},"items":null}
Oct 6 14:36:57.373: INFO: pods:
{"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"79999"},"items":[{"metadata":{"name":"daemon-set-bv2g7","generateName":"daemon-set-","namespace":"daemonsets-3948","uid":"9cab980b-085b-44c4-93b8-07ff16a3784b","resourceVersion":"79997","creationTimestamp":"2022-10-06T14:36:55Z","labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a2e59f18-a5e5-4176-91e9-7921cacf42ee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-06T14:36:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2e59f18-a5e5-4176-91e9-7921cacf42ee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-06T14:36:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-k8frz","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-k8frz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v124-worker","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v124-worker"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operat
or":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:55Z"}],"hostIP":"172.19.0.10","podIP":"10.244.2.181","podIPs":[{"ip":"10.244.2.181"}],"startTime":"2022-10-06T14:36:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-10-06T14:36:56Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://873f8781a686abae2cfb554f727b7761570fa711252f5a3226a5eff28a3a874f","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-s28cz","generateName":"daemon-set-","namespace":"daemonsets-3948","uid":"23443f95-0600-418b-9b48-19f28fc8dbf0","resourceVersion":"79995","creationTimestamp":"2022-10-06T14:36:55Z","labels":{"controller-revision-hash":"6df8db488c","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a2e59f18-a5e5-4176-91e9-7921cacf42ee","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-10-06T14:36:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2e59f18-a5e5-4176-91e9-7921cacf42ee\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-10-06T14:36:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.223\"}":
{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-g96m6","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-g96m6","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"v124-worker2","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["v124-worker2"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:55Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:56Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:56Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-10-06T14:36:55Z"}],"hostIP":"172.19.0.9","podIP":"10.244.1.223","podIPs":[{"ip":"10.244.1.223"}],"startTime":"2022-10-06T14:36:55Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2022-10-06T14:36:56Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://397038687447078413bcde1ffdcf3c87f797786e7a5498003a9c3faf72773ab4","started":true}],"qosClass":"BestEffort"}}]} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Oct 6 14:36:57.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3948" for this suite. 
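The collection test above needs only one API call to remove every matching DaemonSet; the two pods in the dump are then garbage-collected through their ownerReferences. A sketch of the list-then-DeleteCollection step, using the daemonset-name label visible in the pod dump (clientset assumptions as in the earlier sketches):

func deleteDaemonSetCollection(ctx context.Context, cs kubernetes.Interface, ns string) error {
	sel := "daemonset-name=daemon-set"
	list, err := cs.AppsV1().DaemonSets(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	fmt.Printf("found %d DaemonSets to delete\n", len(list.Items))
	// One call deletes the whole collection; the controller's pods follow
	// via garbage collection of their owner references.
	return cs.AppsV1().DaemonSets(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel})
}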
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":19,"completed":5,"skipped":1216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:36:57.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:652
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 6 14:36:57.450: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:57.453: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:36:57.453: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:36:58.458: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:58.462: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:36:58.462: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:36:59.459: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:36:59.463: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Oct 6 14:36:59.463: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Getting /status
Oct 6 14:36:59.471: INFO: Daemon Set daemon-set has Conditions: []
STEP: updating the DaemonSet Status
Oct 6 14:36:59.481: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the daemon set status to be updated
Oct 6 14:36:59.484: INFO: Observed &DaemonSet event: ADDED
Oct 6 14:36:59.484: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.484: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.484: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.484: INFO: Found daemon set daemon-set in namespace daemonsets-6376 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Oct 6 14:36:59.484: INFO: Daemon set daemon-set has an updated status
STEP: patching the DaemonSet Status
STEP: watching for the daemon set status to be patched
Oct 6 14:36:59.495: INFO: Observed &DaemonSet event: ADDED
Oct 6 14:36:59.495: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.495: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.495: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.495: INFO: Observed daemon set daemon-set in namespace daemonsets-6376 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Oct 6 14:36:59.495: INFO: Observed &DaemonSet event: MODIFIED
Oct 6 14:36:59.496: INFO: Found daemon set daemon-set in namespace daemonsets-6376 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }]
Oct 6 14:36:59.496: INFO: Daemon set daemon-set has a patched status
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6376, will wait for the garbage collector to delete the pods
Oct 6 14:36:59.559: INFO: Deleting DaemonSet.extensions daemon-set took: 5.332639ms
Oct 6 14:36:59.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.223336ms
Oct 6 14:37:02.263: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:37:02.263: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Oct 6 14:37:02.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"80103"},"items":null}
Oct 6 14:37:02.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"80103"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:188
Oct 6 14:37:02.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6376" for this suite.
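The status spec exercises the /status subresource in both ways the API allows: a full UpdateStatus with an appended condition, then a merge patch routed to /status. A sketch of both calls, matching the condition values visible in the log; assumes appsv1 "k8s.io/api/apps/v1", corev1 "k8s.io/api/core/v1", and types "k8s.io/apimachinery/pkg/types" are imported alongside the earlier clientset setup:

func mutateDaemonSetStatus(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// UpdateStatus writes only the status subresource; the spec is untouched.
	ds.Status.Conditions = append(ds.Status.Conditions, appsv1.DaemonSetCondition{
		Type:    "StatusUpdate",
		Status:  corev1.ConditionTrue,
		Reason:  "E2E",
		Message: "Set from e2e test",
	})
	if _, err := cs.AppsV1().DaemonSets(ns).UpdateStatus(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return err
	}
	// The trailing "status" argument routes the patch to /status as well.
	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	_, err = cs.AppsV1().DaemonSets(ns).Patch(ctx, "daemon-set",
		types.MergePatchType, payload, metav1.PatchOptions{}, "status")
	return err
}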
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":19,"completed":6,"skipped":1794,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:37:02.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:92
Oct 6 14:37:02.319: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 6 14:37:02.327: INFO: Waiting for terminating namespaces to be deleted...
Oct 6 14:37:02.331: INFO: Logging pods the apiserver thinks is on node v124-worker before test
Oct 6 14:37:02.337: INFO: create-loop-devs-l6gwt from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.337: INFO: Container loopdev ready: true, restart count 0
Oct 6 14:37:02.337: INFO: kindnet-mqx84 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.337: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 14:37:02.337: INFO: kube-proxy-4zxs5 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.337: INFO: Container kube-proxy ready: true, restart count 0
Oct 6 14:37:02.337: INFO: Logging pods the apiserver thinks is on node v124-worker2 before test
Oct 6 14:37:02.343: INFO: create-loop-devs-bk5wm from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.343: INFO: Container loopdev ready: true, restart count 0
Oct 6 14:37:02.343: INFO: kindnet-j57gv from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.343: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 14:37:02.343: INFO: kube-proxy-jgqmb from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded)
Oct 6 14:37:02.343: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  test/e2e/framework/framework.go:652
Oct 6 14:47:03.460: INFO: Timed out waiting for the following pods to schedule
Oct 6 14:47:03.461: INFO: kube-bench-m4t95/kube-bench-master-67mmg
Oct 6 14:47:03.461: INFO: kube-bench-wj542/kube-bench-master-mx42f
Oct 6 14:47:03.461: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.5()
    test/e2e/scheduling/predicates.go:327 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x2562497?)
    test/e2e/e2e.go:130 +0x686
k8s.io/kubernetes/test/e2e.TestE2E(0x0?)
test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000e26ea0, 0x73ba620) /usr/local/go/src/testing/testing.go:1439 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1486 +0x35f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "sched-pred-3339". STEP: Found 0 events. Oct 6 14:47:03.469: INFO: POD NODE PHASE GRACE CONDITIONS Oct 6 14:47:03.470: INFO: Oct 6 14:47:03.474: INFO: Logging node info for node v124-control-plane Oct 6 14:47:03.477: INFO: Node Info: &Node{ObjectMeta:{v124-control-plane 9e92b19a-b40b-4e13-9e93-d3686109465e 80681 0 2022-10-06 06:58:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-06 06:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-06 06:58:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:43:26 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:43:26 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:43:26 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:43:26 +0000 UTC,LastTransitionTime:2022-10-06 06:58:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.8,},NodeAddress{Type:Hostname,Address:v124-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:85a0ec6791694f75b335cd06ddf2ecff,SystemUUID:b1a2237d-40f3-41a1-9c1d-f9e885e994c2,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:47:03.478: INFO: Logging kubelet events for node v124-control-plane Oct 6 14:47:03.481: INFO: Logging pods the kubelet thinks is on node v124-control-plane Oct 6 14:47:03.539: INFO: coredns-6d4b75cb6d-7tbh5 started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container coredns ready: true, restart count 0 Oct 6 14:47:03.539: INFO: kube-apiserver-v124-control-plane started at 2022-10-06 06:58:13 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container kube-apiserver ready: true, restart count 0 Oct 6 14:47:03.539: INFO: kube-controller-manager-v124-control-plane started at 2022-10-06 06:58:12 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 6 14:47:03.539: 
INFO: kube-scheduler-v124-control-plane started at 2022-10-06 06:58:12 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container kube-scheduler ready: true, restart count 0 Oct 6 14:47:03.539: INFO: kindnet-7q2l8 started at 2022-10-06 06:58:25 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:47:03.539: INFO: kube-proxy-c79zn started at 2022-10-06 06:58:25 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:47:03.539: INFO: etcd-v124-control-plane started at 2022-10-06 06:58:13 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container etcd ready: true, restart count 0 Oct 6 14:47:03.539: INFO: coredns-6d4b75cb6d-rq2nz started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container coredns ready: true, restart count 0 Oct 6 14:47:03.539: INFO: local-path-provisioner-6b84c5c67f-blz9z started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 6 14:47:03.539: INFO: create-loop-devs-tdsp5 started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.539: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:47:03.637: INFO: Latency metrics for node v124-control-plane Oct 6 14:47:03.637: INFO: Logging node info for node v124-worker Oct 6 14:47:03.641: INFO: Node Info: &Node{ObjectMeta:{v124-worker c29cd886-d42a-4971-add5-b8b5e08d6925 80981 0 2022-10-06 06:58:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-06 06:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-06 06:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2022-10-06 14:36:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet Update v1 2022-10-06 14:36:47 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 
DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {} 5 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.10,},NodeAddress{Type:Hostname,Address:v124-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c82e5534232b4be8997ff8d4be661abc,SystemUUID:7cdeb474-b03a-4619-8deb-a9fd51c2d665,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:47:03.642: INFO: Logging kubelet events for node v124-worker Oct 6 14:47:03.645: INFO: Logging pods the kubelet thinks is on node v124-worker Oct 6 14:47:03.670: INFO: kube-proxy-4zxs5 started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.670: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:47:03.670: INFO: create-loop-devs-l6gwt started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.670: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:47:03.670: INFO: kindnet-mqx84 started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.670: 
INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:47:03.742: INFO: Latency metrics for node v124-worker Oct 6 14:47:03.742: INFO: Logging node info for node v124-worker2 Oct 6 14:47:03.746: INFO: Node Info: &Node{ObjectMeta:{v124-worker2 91b1d551-b462-461b-8abe-51808e375f04 80982 0 2022-10-06 06:58:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-06 06:58:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-06 06:58:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {e2e.test Update v1 2022-10-06 14:36:41 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}} status} {kubelet Update v1 2022-10-06 14:36:49 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:scheduling.k8s.io/foo":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {} 5 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{5 0} {} 5 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:47:01 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:47:01 +0000 
UTC,LastTransitionTime:2022-10-06 06:58:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.9,},NodeAddress{Type:Hostname,Address:v124-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be39b32785464dfd813f83e6eadce16f,SystemUUID:72d12092-4bad-4831-878b-10805e1f24d7,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:47:03.746: INFO: Logging kubelet events for node v124-worker2 Oct 6 14:47:03.749: INFO: Logging pods the kubelet thinks is on node v124-worker2 Oct 6 14:47:03.773: INFO: create-loop-devs-bk5wm started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.773: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:47:03.773: INFO: kube-proxy-jgqmb started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.773: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:47:03.774: INFO: kindnet-j57gv started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:47:03.774: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:47:03.847: INFO: Latency metrics for node v124-worker2 Oct 6 14:47:03.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3339" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 • Failure [601.561 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] [It] test/e2e/framework/framework.go:652 Oct 6 14:47:03.461: Timed out after 10m0s waiting for stable cluster. 
test/e2e/scheduling/predicates.go:327 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":19,"completed":6,"skipped":2265,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:47:03.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should run and stop simple daemon [Conformance] test/e2e/framework/framework.go:652 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 6 14:47:03.913: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:03.916: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:47:03.916: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:04.922: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:04.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:47:04.926: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:05.922: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:05.926: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Oct 6 14:47:05.926: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Stop a daemon pod, check that the daemon pod is revived. 
Oct 6 14:47:05.944: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:05.948: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Oct 6 14:47:05.948: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:06.954: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:06.958: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Oct 6 14:47:06.958: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:07.954: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:07.959: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Oct 6 14:47:07.959: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:08.953: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:08.958: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Oct 6 14:47:08.958: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:09.954: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:09.958: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Oct 6 14:47:09.958: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6391, will wait for the garbage collector to delete the pods Oct 6 14:47:10.019: INFO: Deleting DaemonSet.extensions daemon-set took: 4.632877ms Oct 6 14:47:10.120: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.932375ms Oct 6 14:47:12.325: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:47:12.325: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Oct 6 14:47:12.328: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"81056"},"items":null} Oct 6 14:47:12.331: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"81056"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Oct 6 14:47:12.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6391" for this suite. 
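The passing "simple daemon" run above boils down to one API interaction: create a DaemonSet, then poll its status until every node the pods can tolerate reports a ready replica. Below is a minimal client-go sketch of that loop, not the test's actual code; the kubeconfig path, `default` namespace, image, and the `daemonset-name` label are assumptions chosen to mirror the log.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
				}}},
			},
		},
	}
	if _, err := cs.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll the status until every targeted node runs a ready pod.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		got, err := cs.AppsV1().DaemonSets("default").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("desired=%d ready=%d\n", got.Status.DesiredNumberScheduled, got.Status.NumberReady)
		return got.Status.DesiredNumberScheduled > 0 &&
			got.Status.NumberReady == got.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}
```

The desired count never includes the control-plane node: its NoSchedule taints exclude it unless the pod template tolerates them, which is exactly what the repeated "can't tolerate ... skip checking this node" lines record.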
• [SLOW TEST:8.489 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] test/e2e/framework/framework.go:652 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":19,"completed":7,"skipped":2632,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:47:12.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 6 14:47:12.408: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:12.411: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:47:12.411: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:47:13.416: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:13.420: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Oct 6 14:47:13.420: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Oct 6 14:47:13.438: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:13.442: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 Oct 6 14:47:13.442: INFO: Node v124-worker2 is running 0 daemon pod, expected 1 Oct 6 14:47:14.448: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:47:14.451: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Oct 6 14:47:14.451: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4839, will wait for the garbage collector to delete the pods Oct 6 14:47:14.518: INFO: Deleting DaemonSet.extensions daemon-set took: 5.387233ms Oct 6 14:47:14.618: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.159091ms Oct 6 14:47:18.322: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:47:18.322: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Oct 6 14:47:18.326: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"81146"},"items":null} Oct 6 14:47:18.329: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"81146"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Oct 6 14:47:18.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4839" for this suite. 
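The "retry creating failed daemon pods" run above flips one daemon pod's status.phase to Failed and asserts that the controller deletes it and creates a replacement. A hedged sketch of that step: `cs` is a *kubernetes.Clientset built as in the previous sketch, and the label selector is an assumption.

```go
package sketches

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// failOneDaemonPod marks a single daemon pod Failed through the status
// subresource. The DaemonSet controller treats Failed pods as dead, removes
// the pod, and schedules a replacement: the "revived" behaviour asserted above.
func failOneDaemonPod(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		LabelSelector: "daemonset-name=daemon-set", // assumed label
	})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return fmt.Errorf("no daemon pods found in %s", ns)
	}
	victim := pods.Items[0]
	victim.Status.Phase = corev1.PodFailed
	_, err = cs.CoreV1().Pods(ns).UpdateStatus(ctx, &victim, metav1.UpdateOptions{})
	return err
}
```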
• [SLOW TEST:5.993 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] test/e2e/framework/framework.go:652 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":19,"completed":8,"skipped":2923,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:47:18.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Oct 6 14:47:18.388: INFO: Waiting up to 1m0s for all nodes to be ready Oct 6 14:48:18.415: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:48:18.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:690 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/framework/framework.go:652 Oct 6 14:48:18.459: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Oct 6 14:48:18.463: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints test/e2e/framework/framework.go:188 Oct 6 14:48:18.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-6415" for this suite. [AfterEach] PriorityClass endpoints test/e2e/scheduling/preemption.go:706 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188 Oct 6 14:48:18.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5603" for this suite. 
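The two Invalid errors in the PriorityClass run above are the point of the test: a PriorityClass's value is immutable after creation, so the API server must reject any update that changes it while still allowing other mutations. A small sketch that provokes the same error ("p1" mirrors the log; `cs` as before):

```go
package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpPriorityValue tries to change an existing PriorityClass's Value.
// The apiserver answers with the error seen in the log:
//   PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden:
//   may not be changed in an update.
func bumpPriorityValue(ctx context.Context, cs *kubernetes.Clientset) error {
	pc, err := cs.SchedulingV1().PriorityClasses().Get(ctx, "p1", metav1.GetOptions{})
	if err != nil {
		return err
	}
	pc.Value++ // mutating Value is what makes the update invalid
	_, err = cs.SchedulingV1().PriorityClasses().Update(ctx, pc, metav1.UpdateOptions{})
	return err // apierrors.IsInvalid(err) would be true here
}
```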
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 • [SLOW TEST:60.193 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PriorityClass endpoints test/e2e/scheduling/preemption.go:683 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] test/e2e/framework/framework.go:652 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":19,"completed":9,"skipped":3063,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]"]} [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:48:18.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:145 [It] should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652 Oct 6 14:48:18.587: INFO: Create a RollingUpdate DaemonSet Oct 6 14:48:18.592: INFO: Check that daemon pods launch on every node of the cluster Oct 6 14:48:18.597: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:48:18.600: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:48:18.600: INFO: Node v124-worker is running 0 daemon pod, expected 1 Oct 6 14:48:19.605: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:48:19.610: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 Oct 6 14:48:19.610: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set Oct 6 14:48:19.610: INFO: Update the DaemonSet to trigger a rollout Oct 6 14:48:19.619: INFO: Updating DaemonSet daemon-set Oct 6 14:48:22.644: INFO: Roll back the DaemonSet before rollout is complete Oct 6 14:48:22.653: INFO: Updating DaemonSet daemon-set Oct 6 14:48:22.653: INFO: Make sure DaemonSet rollback is complete Oct 6 14:48:22.657: INFO: Wrong image for pod: daemon-set-v9bnr. Expected: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2, got: foo:non-existent. 
Oct 6 14:48:22.657: INFO: Pod daemon-set-v9bnr is not available Oct 6 14:48:22.662: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:48:23.671: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:48:24.672: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 6 14:48:25.667: INFO: Pod daemon-set-n98vm is not available Oct 6 14:48:25.672: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/apps/daemon_set.go:110 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1008, will wait for the garbage collector to delete the pods Oct 6 14:48:25.738: INFO: Deleting DaemonSet.extensions daemon-set took: 4.805051ms Oct 6 14:48:25.838: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.119663ms Oct 6 14:48:28.541: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 Oct 6 14:48:28.541: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set Oct 6 14:48:28.544: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"81382"},"items":null} Oct 6 14:48:28.548: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"81382"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] test/e2e/framework/framework.go:188 Oct 6 14:48:28.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1008" for this suite. 
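The rollback run above pushes the unpullable image foo:non-existent into a RollingUpdate DaemonSet, then restores the old template before the rollout finishes, and checks that pods never touched by the bad revision are not restarted. A sketch of the break-then-restore sequence (`cs` as before); restoring the saved image by hand stands in for `kubectl rollout undo`, which replays an earlier ControllerRevision.

```go
package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// breakAndRollBack updates a RollingUpdate DaemonSet to an image that can
// never be pulled, then rolls the template back mid-rollout.
func breakAndRollBack(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
	dsClient := cs.AppsV1().DaemonSets(ns)
	ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	good := ds.Spec.Template.Spec.Containers[0].Image
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent" // same bad image as the log
	if ds, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return err
	}
	ds.Spec.Template.Spec.Containers[0].Image = good // roll back before the rollout completes
	_, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{})
	return err
}
```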
• [SLOW TEST:10.023 seconds] [sig-apps] Daemon set [Serial] test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] test/e2e/framework/framework.go:652 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":19,"completed":10,"skipped":3063,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:48:28.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:92 Oct 6 14:48:28.601: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 6 14:48:28.609: INFO: Waiting for terminating namespaces to be deleted... Oct 6 14:48:28.612: INFO: Logging pods the apiserver thinks is on node v124-worker before test Oct 6 14:48:28.618: INFO: create-loop-devs-l6gwt from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.618: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:48:28.618: INFO: kindnet-mqx84 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.618: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:48:28.618: INFO: kube-proxy-4zxs5 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.618: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:48:28.618: INFO: Logging pods the apiserver thinks is on node v124-worker2 before test Oct 6 14:48:28.624: INFO: create-loop-devs-bk5wm from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.624: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:48:28.624: INFO: kindnet-j57gv from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.624: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:48:28.624: INFO: kube-proxy-jgqmb from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container statuses recorded) Oct 6 14:48:28.624: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] test/e2e/framework/framework.go:652 STEP: Trying to schedule Pod with nonempty NodeSelector. 
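The STEP above creates a pod whose nodeSelector matches no label on any node; the conformance expectation is that it stays Pending with a FailedScheduling event. (In this run the wait times out first because leftover kube-bench pods keep the cluster from looking stable, as the next lines show.) A sketch of such a pod; the label pair is illustrative, and the pause image appears in the node image lists above.

```go
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod can never be scheduled: no node carries the label
// "label=nonempty", so the scheduler leaves it Pending indefinitely.
var restrictedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{"label": "nonempty"}, // matches nothing
		Containers: []corev1.Container{{
			Name:  "app",
			Image: "k8s.gcr.io/pause:3.7",
		}},
	},
}
```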
Oct 6 14:58:29.707: INFO: Timed out waiting for the following pods to schedule Oct 6 14:58:29.707: INFO: kube-bench-m4t95/kube-bench-master-67mmg Oct 6 14:58:29.707: INFO: kube-bench-wj542/kube-bench-master-mx42f Oct 6 14:58:29.707: FAIL: Timed out after 10m0s waiting for stable cluster. Full Stack Trace k8s.io/kubernetes/test/e2e/scheduling.glob..func4.6() test/e2e/scheduling/predicates.go:442 +0x85 k8s.io/kubernetes/test/e2e.RunE2ETests(0x2562497?) test/e2e/e2e.go:130 +0x686 k8s.io/kubernetes/test/e2e.TestE2E(0x0?) test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000e26ea0, 0x73ba620) /usr/local/go/src/testing/testing.go:1439 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1486 +0x35f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/framework/framework.go:188 STEP: Collecting events from namespace "sched-pred-1643". STEP: Found 0 events. Oct 6 14:58:29.715: INFO: POD NODE PHASE GRACE CONDITIONS Oct 6 14:58:29.715: INFO: Oct 6 14:58:29.719: INFO: Logging node info for node v124-control-plane Oct 6 14:58:29.722: INFO: Node Info: &Node{ObjectMeta:{v124-control-plane 9e92b19a-b40b-4e13-9e93-d3686109465e 81852 0 2022-10-06 06:58:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-10-06 06:58:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-10-06 06:58:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: 
{{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:38 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:38 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:38 +0000 UTC,LastTransitionTime:2022-10-06 06:58:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:53:38 +0000 UTC,LastTransitionTime:2022-10-06 06:58:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.8,},NodeAddress{Type:Hostname,Address:v124-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:85a0ec6791694f75b335cd06ddf2ecff,SystemUUID:b1a2237d-40f3-41a1-9c1d-f9e885e994c2,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[registry.k8s.io/pause:3.7],SizeBytes:311278,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:58:29.723: INFO: Logging kubelet events for node v124-control-plane Oct 6 14:58:29.727: INFO: Logging pods the kubelet thinks is on node v124-control-plane Oct 6 14:58:29.774: INFO: kube-scheduler-v124-control-plane started at 2022-10-06 06:58:12 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container 
kube-scheduler ready: true, restart count 0 Oct 6 14:58:29.774: INFO: kindnet-7q2l8 started at 2022-10-06 06:58:25 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:58:29.774: INFO: kube-proxy-c79zn started at 2022-10-06 06:58:25 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:58:29.774: INFO: coredns-6d4b75cb6d-7tbh5 started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container coredns ready: true, restart count 0 Oct 6 14:58:29.774: INFO: kube-apiserver-v124-control-plane started at 2022-10-06 06:58:13 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container kube-apiserver ready: true, restart count 0 Oct 6 14:58:29.774: INFO: kube-controller-manager-v124-control-plane started at 2022-10-06 06:58:12 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container kube-controller-manager ready: true, restart count 0 Oct 6 14:58:29.774: INFO: local-path-provisioner-6b84c5c67f-blz9z started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container local-path-provisioner ready: true, restart count 0 Oct 6 14:58:29.774: INFO: create-loop-devs-tdsp5 started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:58:29.774: INFO: etcd-v124-control-plane started at 2022-10-06 06:58:13 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container etcd ready: true, restart count 0 Oct 6 14:58:29.774: INFO: coredns-6d4b75cb6d-rq2nz started at 2022-10-06 06:58:33 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.774: INFO: Container coredns ready: true, restart count 0 Oct 6 14:58:29.858: INFO: Latency metrics for node v124-control-plane Oct 6 14:58:29.858: INFO: Logging node info for node v124-worker Oct 6 14:58:29.861: INFO: Node Info: &Node{ObjectMeta:{v124-worker c29cd886-d42a-4971-add5-b8b5e08d6925 81836 0 2022-10-06 06:58:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-06 06:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-10-06 06:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-06 14:48:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:27 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:27 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:27 +0000 UTC,LastTransitionTime:2022-10-06 06:58:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:53:27 +0000 UTC,LastTransitionTime:2022-10-06 06:58:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.10,},NodeAddress{Type:Hostname,Address:v124-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c82e5534232b4be8997ff8d4be661abc,SystemUUID:7cdeb474-b03a-4619-8deb-a9fd51c2d665,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 
k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:58:29.862: INFO: Logging kubelet events for node v124-worker Oct 6 14:58:29.865: INFO: Logging pods the kubelet thinks is on node v124-worker Oct 6 14:58:29.888: INFO: kube-proxy-4zxs5 started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.888: INFO: 
Container kube-proxy ready: true, restart count 0 Oct 6 14:58:29.888: INFO: create-loop-devs-l6gwt started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.888: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:58:29.888: INFO: kindnet-mqx84 started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.888: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:58:29.952: INFO: Latency metrics for node v124-worker Oct 6 14:58:29.952: INFO: Logging node info for node v124-worker2 Oct 6 14:58:29.955: INFO: Node Info: &Node{ObjectMeta:{v124-worker2 91b1d551-b462-461b-8abe-51808e375f04 81839 0 2022-10-06 06:58:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v124-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-10-06 06:58:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubelet Update v1 2022-10-06 06:58:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-10-06 06:58:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-10-06 14:48:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v124/v124-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470559055872 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67412094976 0} {} 65832124Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:28 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:28 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-10-06 14:53:28 +0000 UTC,LastTransitionTime:2022-10-06 06:58:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet 
has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-10-06 14:53:28 +0000 UTC,LastTransitionTime:2022-10-06 06:58:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.19.0.9,},NodeAddress{Type:Hostname,Address:v124-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be39b32785464dfd813f83e6eadce16f,SystemUUID:72d12092-4bad-4831-878b-10805e1f24d7,BootID:5bd644eb-fc54-4a37-951a-44566369b55e,KernelVersion:5.15.0-48-generic,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.8,KubeletVersion:v1.24.6,KubeProxyVersion:v1.24.6,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:38a947b3fed294c7fda4862b8ca7dda89cfc33b3f15f213093880099151f6ce9 k8s.gcr.io/kube-proxy:v1.24.6],SizeBytes:111859564,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.3-0],SizeBytes:102143581,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:08e71ff3b532961f442bf22b8eab67d6889e11e1fcfdcdbafa0d7507bb876e95 k8s.gcr.io/kube-apiserver:v1.24.6],SizeBytes:77291997,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:b29805ceeb0be039cc781fcd7b29ecde8ecb828ace06caa20b70f7f570f960b3 k8s.gcr.io/kube-controller-manager:v1.24.6],SizeBytes:65568879,},ContainerImage{Names:[docker.io/library/import-2022-09-22@sha256:3b61d866a53a412219510fd92c280467266d348de5d779ee1712ef51cd84bfd8 k8s.gcr.io/kube-scheduler:v1.24.6],SizeBytes:52338799,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220726-ed811e41],SizeBytes:25818452,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220607-9a4d8d2a],SizeBytes:2859509,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],SizeBytes:311278,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 6 14:58:29.955: INFO: Logging kubelet events for node v124-worker2 Oct 6 14:58:29.958: INFO: Logging pods the kubelet thinks is on node v124-worker2 Oct 6 14:58:29.980: INFO: kube-proxy-jgqmb started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.981: INFO: Container kube-proxy ready: true, restart count 0 Oct 6 14:58:29.981: INFO: kindnet-j57gv started at 2022-10-06 06:58:31 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.981: INFO: Container kindnet-cni ready: true, restart count 0 Oct 6 14:58:29.981: INFO: create-loop-devs-bk5wm started at 2022-10-06 06:58:35 +0000 UTC (0+1 container statuses recorded) Oct 6 14:58:29.981: INFO: Container loopdev ready: true, restart count 0 Oct 6 14:58:30.044: INFO: Latency metrics for node v124-worker2 Oct 6 14:58:30.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1643" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/predicates.go:83 • Failure [601.476 seconds] [sig-scheduling] SchedulerPredicates [Serial] test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if not matching [Conformance] [It] test/e2e/framework/framework.go:652 Oct 6 14:58:29.707: Timed out after 10m0s waiting for stable cluster. 
test/e2e/scheduling/predicates.go:442 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":19,"completed":10,"skipped":3558,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:58:30.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should patch a Namespace [Conformance] test/e2e/framework/framework.go:652 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] test/e2e/framework/framework.go:188 Oct 6 14:58:30.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7408" for this suite. STEP: Destroying namespace "nspatchtest-72f99664-21d5-41e2-8a47-7d45dd2d295d-6667" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":19,"completed":11,"skipped":4081,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] test/e2e/framework/framework.go:652 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:58:30.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:92 Oct 6 14:58:30.165: INFO: Waiting up to 1m0s for all nodes to be ready Oct 6 14:59:30.191: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath test/e2e/framework/framework.go:187 STEP: Creating a kubernetes client Oct 6 14:59:30.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:496 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Oct 6 14:59:32.246: INFO: found a healthy node: v124-worker [It] runs ReplicaSets to verify preemption running path [Conformance] test/e2e/framework/framework.go:652 Oct 6 14:59:40.317: INFO: pods created so far: [1 1 1] Oct 6 14:59:40.317: INFO: length of pods created so far: 3 Oct 6 14:59:42.324: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath test/e2e/framework/framework.go:188 Oct 6 14:59:49.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-9474" for this suite. [AfterEach] PreemptionExecutionPath test/e2e/scheduling/preemption.go:470 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/framework/framework.go:188 Oct 6 14:59:49.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6685" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/preemption.go:80 • [SLOW TEST:79.280 seconds] [sig-scheduling] SchedulerPreemption [Serial] test/e2e/scheduling/framework.go:40 PreemptionExecutionPath test/e2e/scheduling/preemption.go:458 runs ReplicaSets to verify preemption running path [Conformance] test/e2e/framework/framework.go:652 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":19,"completed":12,"skipped":4261,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is 
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-apps] Daemon set [Serial]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 14:59:49.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
test/e2e/apps/daemon_set.go:145
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
test/e2e/framework/framework.go:652
Oct 6 14:59:49.471: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Oct 6 14:59:49.481: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:49.484: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:59:49.484: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:59:50.489: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:50.493: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 14:59:50.493: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:59:51.491: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:51.495: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Oct 6 14:59:51.495: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Oct 6 14:59:51.524: INFO: Wrong image for pod: daemon-set-qz8vv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
Oct 6 14:59:51.524: INFO: Wrong image for pod: daemon-set-tbh42. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
Oct 6 14:59:51.529: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:52.534: INFO: Wrong image for pod: daemon-set-qz8vv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
Oct 6 14:59:52.538: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:53.534: INFO: Wrong image for pod: daemon-set-qz8vv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
Oct 6 14:59:53.537: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:54.531: INFO: Pod daemon-set-btdbc is not available
Oct 6 14:59:54.531: INFO: Wrong image for pod: daemon-set-qz8vv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.39, got: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2.
Oct 6 14:59:54.533: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:55.537: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:56.540: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:57.538: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:58.532: INFO: Pod daemon-set-w69kj is not available
Oct 6 14:59:58.535: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Oct 6 14:59:58.539: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:58.541: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1
Oct 6 14:59:58.541: INFO: Node v124-worker is running 0 daemon pod, expected 1
Oct 6 14:59:59.548: INFO: DaemonSet pods can't tolerate node v124-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:} {Key:node-role.kubernetes.io/control-plane Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 6 14:59:59.552: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Oct 6 14:59:59.552: INFO: Number of running nodes: 2, number of available pods: 2 in daemonset daemon-set
[AfterEach] [sig-apps] Daemon set [Serial]
test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5683, will wait for the garbage collector to delete the pods
Oct 6 14:59:59.629: INFO: Deleting DaemonSet.extensions daemon-set took: 5.927338ms
Oct 6 14:59:59.730: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.931169ms
Oct 6 15:00:02.033: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Oct 6 15:00:02.033: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set
Oct 6 15:00:02.036: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"82665"},"items":null}
Oct 6 15:00:02.038: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"82665"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
test/e2e/framework/framework.go:188
Oct 6 15:00:02.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5683" for this suite.
• [SLOW TEST:12.619 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":19,"completed":13,"skipped":5320,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
------------------------------
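The RollingUpdate spec above amounts to creating a DaemonSet and then bumping its pod-template image. A minimal client-go sketch, assuming the default namespace and an illustrative selector; the two image tags are the ones the spec logs, everything else is not taken from this run:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	labels := map[string]string{"app": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().DaemonSets("default").Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Bump the image; the DaemonSet controller then replaces pods one node at
	// a time, which is the "Wrong image for pod" / "is not available" churn
	// visible in the log above.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.39"
	if _, err := cs.AppsV1().DaemonSets("default").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}

RollingUpdate is the default update strategy for apps/v1 DaemonSets, so spelling it out here mostly documents intent. The repeated "can't tolerate node v124-control-plane" lines are expected: the pod template carries no toleration for the control-plane NoSchedule taints, so that node is skipped.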
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 15:00:02.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:92
Oct 6 15:00:02.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 6 15:00:02.084: INFO: Waiting for terminating namespaces to be deleted...
Oct 6 15:00:02.086: INFO: Logging pods the apiserver thinks are on node v124-worker before test
Oct 6 15:00:02.091: INFO: create-loop-devs-l6gwt from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.091: INFO: Container loopdev ready: true, restart count 0
Oct 6 15:00:02.091: INFO: kindnet-mqx84 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.091: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 15:00:02.091: INFO: kube-proxy-4zxs5 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.091: INFO: Container kube-proxy ready: true, restart count 0
Oct 6 15:00:02.091: INFO: Logging pods the apiserver thinks are on node v124-worker2 before test
Oct 6 15:00:02.095: INFO: create-loop-devs-bk5wm from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.095: INFO: Container loopdev ready: true, restart count 0
Oct 6 15:00:02.095: INFO: kindnet-j57gv from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.095: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 15:00:02.095: INFO: kube-proxy-jgqmb from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:00:02.095: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-481133c3-bfb7-4dac-984b-0dd241a9e93c 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.19.0.10 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-481133c3-bfb7-4dac-984b-0dd241a9e93c off the node v124-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-481133c3-bfb7-4dac-984b-0dd241a9e93c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:188
Oct 6 15:05:06.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5445" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:83
• [SLOW TEST:304.144 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":19,"completed":14,"skipped":5673,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
------------------------------
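The hostPort-conflict spec reduces to two pod specs that differ only in hostIP. A minimal sketch, assuming the default namespace and pinning both pods to one node via the kubernetes.io/hostname label; the port, hostIPs, node name, and image follow the log, the container details are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func pod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Pin both pods to the same node so the host ports collide.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "v124-worker"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 80,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// pod4 schedules; pod5 stays Pending because 0.0.0.0:54322/TCP already
	// covers every host address, including 172.19.0.10.
	for _, p := range []*corev1.Pod{pod("pod4", "0.0.0.0"), pod("pod5", "172.19.0.10")} {
		if _, err := cs.CoreV1().Pods("default").Create(ctx, p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}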
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 15:05:06.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
test/e2e/framework/framework.go:652
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Oct 6 15:05:06.556: INFO: Pod name wrapped-volume-race-4242461f-e103-480e-b2e7-ce8cb2b87c3f: Found 1 pods out of 5
Oct 6 15:05:11.567: INFO: Pod name wrapped-volume-race-4242461f-e103-480e-b2e7-ce8cb2b87c3f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4242461f-e103-480e-b2e7-ce8cb2b87c3f in namespace emptydir-wrapper-9466, will wait for the garbage collector to delete the pods
Oct 6 15:05:21.658: INFO: Deleting ReplicationController wrapped-volume-race-4242461f-e103-480e-b2e7-ce8cb2b87c3f took: 7.705949ms
Oct 6 15:05:21.759: INFO: Terminating ReplicationController wrapped-volume-race-4242461f-e103-480e-b2e7-ce8cb2b87c3f pods took: 101.105759ms
STEP: Creating RC which spawns configmap-volume pods
Oct 6 15:05:25.679: INFO: Pod name wrapped-volume-race-93f74889-5ca3-4c86-9363-b063c0a7b52a: Found 0 pods out of 5
Oct 6 15:05:30.687: INFO: Pod name wrapped-volume-race-93f74889-5ca3-4c86-9363-b063c0a7b52a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-93f74889-5ca3-4c86-9363-b063c0a7b52a in namespace emptydir-wrapper-9466, will wait for the garbage collector to delete the pods
Oct 6 15:05:42.777: INFO: Deleting ReplicationController wrapped-volume-race-93f74889-5ca3-4c86-9363-b063c0a7b52a took: 6.282166ms
Oct 6 15:05:42.878: INFO: Terminating ReplicationController wrapped-volume-race-93f74889-5ca3-4c86-9363-b063c0a7b52a pods took: 101.078338ms
STEP: Creating RC which spawns configmap-volume pods
Oct 6 15:05:45.600: INFO: Pod name wrapped-volume-race-c7570ca0-1fe3-4401-b2f3-a7a324b3090e: Found 0 pods out of 5
Oct 6 15:05:50.609: INFO: Pod name wrapped-volume-race-c7570ca0-1fe3-4401-b2f3-a7a324b3090e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c7570ca0-1fe3-4401-b2f3-a7a324b3090e in namespace emptydir-wrapper-9466, will wait for the garbage collector to delete the pods
Oct 6 15:06:00.699: INFO: Deleting ReplicationController wrapped-volume-race-c7570ca0-1fe3-4401-b2f3-a7a324b3090e took: 6.162925ms
Oct 6 15:06:00.800: INFO: Terminating ReplicationController wrapped-volume-race-c7570ca0-1fe3-4401-b2f3-a7a324b3090e pods took: 100.598553ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
test/e2e/framework/framework.go:188
Oct 6 15:06:03.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9466" for this suite.
• [SLOW TEST:57.145 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":19,"completed":15,"skipped":5679,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
------------------------------
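The wrapper-volume race spec stresses the kubelet with pods that each mount many ConfigMap volumes at once. A minimal sketch of one such pod over the 50 ConfigMaps the spec creates, assuming the default namespace and an illustrative busybox image (the spec itself runs five replicas per round through a ReplicationController):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		// One ConfigMap per volume, all mounted into the same pod.
		if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data-1": "value-1"},
		}, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/" + name})
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race"},
		Spec: corev1.PodSpec{
			Volumes: volumes,
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
				Command:      []string{"sleep", "10000"},
				VolumeMounts: mounts,
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Racing many such pods against ConfigMap deletion is what historically exposed mount races in the emptyDir wrapper; the spec passing means three create/delete rounds completed cleanly.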
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 15:06:03.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:92
Oct 6 15:06:03.385: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 6 15:06:03.394: INFO: Waiting for terminating namespaces to be deleted...
Oct 6 15:06:03.397: INFO: Logging pods the apiserver thinks are on node v124-worker before test
Oct 6 15:06:03.404: INFO: create-loop-devs-l6gwt from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.404: INFO: Container loopdev ready: true, restart count 0
Oct 6 15:06:03.404: INFO: kindnet-mqx84 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.404: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 15:06:03.404: INFO: kube-proxy-4zxs5 from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.404: INFO: Container kube-proxy ready: true, restart count 0
Oct 6 15:06:03.404: INFO: Logging pods the apiserver thinks are on node v124-worker2 before test
Oct 6 15:06:03.410: INFO: create-loop-devs-bk5wm from kube-system started at 2022-10-06 06:58:35 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.410: INFO: Container loopdev ready: true, restart count 0
Oct 6 15:06:03.410: INFO: kindnet-j57gv from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.410: INFO: Container kindnet-cni ready: true, restart count 0
Oct 6 15:06:03.410: INFO: kube-proxy-jgqmb from kube-system started at 2022-10-06 06:58:31 +0000 UTC (1 container status recorded)
Oct 6 15:06:03.410: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
test/e2e/framework/framework.go:652
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-5f89861a-d427-46d9-82c8-c955f6e2cc2e 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-5f89861a-d427-46d9-82c8-c955f6e2cc2e off the node v124-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-5f89861a-d427-46d9-82c8-c955f6e2cc2e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/framework/framework.go:188
Oct 6 15:06:07.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-83" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":19,"completed":16,"skipped":6109,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
------------------------------
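The matching-NodeSelector spec labels one node, then relaunches the pod with a nodeSelector that requires the label. A minimal sketch, assuming an illustrative label key and pod name; the value 42 and node v124-worker2 follow the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// "apply a random label on the found node": a strategic merge patch.
	patch := []byte(`{"metadata":{"labels":{"kubernetes.io/e2e-example":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "v124-worker2",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// "relaunch the pod, now with labels": schedulable only on the labeled node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/e2e-example": "42"},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// To undo, patch the label to null:
	// {"metadata":{"labels":{"kubernetes.io/e2e-example":null}}}
}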
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
test/e2e/framework/framework.go:652
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:187
STEP: Creating a kubernetes client
Oct 6 15:06:07.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:92
Oct 6 15:06:07.532: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 6 15:07:07.563: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
test/e2e/framework/framework.go:652
STEP: Create pods that use 4/5 of node resources.
Oct 6 15:07:07.591: INFO: Created pod: pod0-0-sched-preemption-low-priority
Oct 6 15:07:07.596: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Oct 6 15:07:07.612: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Oct 6 15:07:07.616: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that uses same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/framework/framework.go:188
Oct 6 15:07:21.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2594" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/preemption.go:80
• [SLOW TEST:74.240 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/framework/framework.go:652
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":19,"completed":17,"skipped":6335,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}
Oct 6 15:07:21.748: INFO: Running AfterSuite actions on all nodes
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Oct 6 15:07:21.748: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Oct 6 15:07:21.748: INFO: Running AfterSuite actions on node 1
Oct 6 15:07:21.748: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance_serial/junit_01.xml
{"msg":"Test Suite completed","total":19,"completed":17,"skipped":6954,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]"]}

Summarizing 2 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run [Conformance]
test/e2e/scheduling/predicates.go:327

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching [Conformance]
test/e2e/scheduling/predicates.go:442

Ran 19 of 6973 Specs in 1928.640 seconds
FAIL! -- 17 Passed | 2 Failed | 0 Pending | 6954 Skipped
--- FAIL: TestE2E (1931.25s)
FAIL

Ginkgo ran 1 suite in 32m11.371059235s
Test Suite Failed
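For completeness, the decisive move in the critical-pod preemption spec (the last one to pass above) is creating a pod that references a built-in critical PriorityClass once the nodes are already packed with low- and medium-priority pods. A minimal sketch, assuming the system-cluster-critical class and illustrative resource numbers (the spec sizes the request to match a victim pod):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod := &corev1.Pod{
		// The spec runs the pod in kube-system, where the built-in
		// critical classes are permitted.
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: metav1.NamespaceSystem},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "critical-pod",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Illustrative: large enough that the scheduler must
						// evict a lower-priority pod to fit this one.
						corev1.ResourceMemory: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(metav1.NamespaceSystem).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}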