I0520 15:22:46.118370 17 e2e.go:129] Starting e2e run "99a0ae25-fe4a-4d46-b285-a6d0209d8ee1" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621524164 - Will randomize all specs
Will run 13 of 5771 specs

May 20 15:22:46.147: INFO: >>> kubeConfig: /root/.kube/config
May 20 15:22:46.150: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 15:22:46.176: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 15:22:46.227: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 15:22:46.227: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 20 15:22:46.227: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 15:22:46.238: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 20 15:22:46.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 20 15:22:46.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 20 15:22:46.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 15:22:46.239: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 20 15:22:46.239: INFO: e2e test version: v1.21.1
May 20 15:22:46.240: INFO: kube-apiserver version: v1.21.0
May 20 15:22:46.240: INFO: >>> kubeConfig: /root/.kube/config
May 20 15:22:46.249: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption
  validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 15:22:46.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
W0520 15:22:46.295908 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 15:22:46.296: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 15:22:46.305: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 15:22:46.320: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 15:23:46.780: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node v1.21-worker2.
STEP: Apply 10 fake resource to node v1.21-worker.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v1.21-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v1.21-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 15:24:25.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6273" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:99.162 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":1,"skipped":209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 15:24:25.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 15:24:25.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 15:24:25.457: INFO: Waiting for terminating namespaces to be deleted...
May 20 15:24:25.460: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:24:25.470: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container loopdev ready: true, restart count 0 May 20 15:24:25.470: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:24:25.470: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container kube-multus ready: true, restart count 0 May 20 15:24:25.470: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:24:25.470: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container setsysctls ready: true, restart count 0 May 20 15:24:25.470: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:24:25.470: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:24:25.470: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container controller ready: true, restart count 0 May 20 15:24:25.470: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container speaker ready: true, restart count 0 May 20 15:24:25.470: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container contour ready: true, restart count 0 May 20 15:24:25.470: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container contour ready: true, restart count 0 May 20 15:24:25.470: INFO: high from sched-preemption-6273 started at 2021-05-20 15:24:07 +0000 UTC (1 container statuses recorded) May 20 15:24:25.470: INFO: Container high ready: true, restart count 0 May 20 15:24:25.470: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:24:25.479: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container coredns ready: true, restart count 0 May 20 15:24:25.479: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container coredns ready: true, restart count 0 May 20 15:24:25.479: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container loopdev ready: true, restart count 0 May 20 15:24:25.479: INFO: kindnet-xkwvl from kube-system 
started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:24:25.479: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container kube-multus ready: true, restart count 0 May 20 15:24:25.479: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:24:25.479: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container setsysctls ready: true, restart count 0 May 20 15:24:25.479: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container speaker ready: true, restart count 0 May 20 15:24:25.479: INFO: low-1 from sched-preemption-6273 started at 2021-05-20 15:24:11 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container low-1 ready: true, restart count 0 May 20 15:24:25.479: INFO: medium from sched-preemption-6273 started at 2021-05-20 15:24:23 +0000 UTC (1 container statuses recorded) May 20 15:24:25.479: INFO: Container medium ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 20 15:24:25.512: INFO: Pod coredns-558bd4d5db-r5ppk requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod coredns-558bd4d5db-xg8b5 requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod create-loop-devs-965k2 requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod create-loop-devs-jq69v requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod kindnet-2qtxh requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod kindnet-xkwvl requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod kube-multus-ds-4pmk4 requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod kube-multus-ds-xst78 requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod kube-proxy-42vmb requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod kube-proxy-gh4rd requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod tune-sysctls-jcgnq requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod tune-sysctls-pgxh4 requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod dashboard-metrics-scraper-856586f554-75x2x requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod kubernetes-dashboard-78c79f97b4-w25tg requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod controller-675995489c-xnj8v requesting local ephemeral resource =0 on Node v1.21-worker May 20 15:24:25.512: INFO: Pod speaker-67fwk requesting local ephemeral resource =0 on Node v1.21-worker2 May 20 15:24:25.512: INFO: Pod speaker-g5b8b requesting local ephemeral 
resource =0 on Node v1.21-worker
May 20 15:24:25.512: INFO: Pod contour-74948c9879-8866g requesting local ephemeral resource =0 on Node v1.21-worker
May 20 15:24:25.512: INFO: Pod contour-74948c9879-cqqjf requesting local ephemeral resource =0 on Node v1.21-worker
May 20 15:24:25.512: INFO: Pod high requesting local ephemeral resource =0 on Node v1.21-worker
May 20 15:24:25.512: INFO: Pod low-1 requesting local ephemeral resource =0 on Node v1.21-worker2
May 20 15:24:25.512: INFO: Pod medium requesting local ephemeral resource =0 on Node v1.21-worker2
May 20 15:24:25.512: INFO: Using pod capacity: 47063248896
May 20 15:24:25.512: INFO: Node: v1.21-worker has local ephemeral resource allocatable: 470632488960
May 20 15:24:25.512: INFO: Node: v1.21-worker2 has local ephemeral resource allocatable: 470632488960
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one
May 20 15:24:25.600: INFO: Waiting for running...
May 20 15:34:25.717: FAIL: Unexpected error:
    <*errors.errorString | 0xc00430a090>: {
        s: "Error waiting for 20 pods to be running - probably a timeout: Timeout while waiting for pods with labels \"startPodsID=922928b8-ab1e-4f13-bf5d-716cbd8d6bd9\" to be running",
    }
    Error waiting for 20 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=922928b8-ab1e-4f13-bf5d-716cbd8d6bd9" to be running
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:165 +0x108d
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000460a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000460a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000460a80, 0x70acc78)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "sched-pred-2274".
STEP: Found 105 events.
May 20 15:34:25.785: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-0: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-0 to v1.21-worker2 May 20 15:34:25.785: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-1: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-1 to v1.21-worker2 May 20 15:34:25.785: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-10: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-10 to v1.21-worker May 20 15:34:25.785: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-11: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-11 to v1.21-worker May 20 15:34:25.785: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-12: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-12 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-13: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-13 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-14: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-14 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-15: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-15 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-16: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-16 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-17: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-17 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-18: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-18 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-19: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-19 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-2: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-2 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-3: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-3 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-4: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-4 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-5: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-5 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-6: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-6 to v1.21-worker May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-7: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-7 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-8: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-8 to v1.21-worker2 May 20 15:34:25.786: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for overcommit-9: { } Scheduled: Successfully assigned sched-pred-2274/overcommit-9 to v1.21-worker May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-0: {multus } AddedInterface: Add eth0 [10.244.2.50/24] May 20 15:34:25.786: INFO: 
At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-0: {kubelet v1.21-worker2} Created: Created container overcommit-0 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-0: {kubelet v1.21-worker2} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-11: {kubelet v1.21-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-11: {multus } AddedInterface: Add eth0 [10.244.1.101/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-11: {kubelet v1.21-worker} Created: Created container overcommit-11 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-13: {kubelet v1.21-worker2} Started: Started container overcommit-13 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-13: {multus } AddedInterface: Add eth0 [10.244.2.47/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-13: {kubelet v1.21-worker2} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-13: {kubelet v1.21-worker2} Created: Created container overcommit-13 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-17: {kubelet v1.21-worker} Started: Started container overcommit-17 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-17: {kubelet v1.21-worker} Created: Created container overcommit-17 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-17: {kubelet v1.21-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-17: {multus } AddedInterface: Add eth0 [10.244.1.100/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-2: {kubelet v1.21-worker2} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-2: {multus } AddedInterface: Add eth0 [10.244.2.49/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-2: {kubelet v1.21-worker2} Started: Started container overcommit-2 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-2: {kubelet v1.21-worker2} Created: Created container overcommit-2 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-4: {multus } AddedInterface: Add eth0 [10.244.2.60/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-4: {kubelet v1.21-worker2} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:26 +0000 UTC - event for overcommit-4: {kubelet v1.21-worker2} Created: Created container overcommit-4 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-0: {kubelet v1.21-worker2} Started: Started container overcommit-0 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-10: {multus } AddedInterface: Add eth0 [10.244.1.145/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-11: {kubelet v1.21-worker} Started: 
Started container overcommit-11 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-12: {multus } AddedInterface: Add eth0 [10.244.2.73/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-15: {multus } AddedInterface: Add eth0 [10.244.1.220/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-16: {multus } AddedInterface: Add eth0 [10.244.2.86/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-19: {multus } AddedInterface: Add eth0 [10.244.1.218/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-3: {multus } AddedInterface: Add eth0 [10.244.2.72/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-4: {kubelet v1.21-worker2} Started: Started container overcommit-4 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-5: {multus } AddedInterface: Add eth0 [10.244.1.102/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-5: {kubelet v1.21-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-5: {kubelet v1.21-worker} Started: Started container overcommit-5 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-5: {kubelet v1.21-worker} Created: Created container overcommit-5 May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-7: {multus } AddedInterface: Add eth0 [10.244.2.95/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-8: {multus } AddedInterface: Add eth0 [10.244.2.75/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:27 +0000 UTC - event for overcommit-9: {multus } AddedInterface: Add eth0 [10.244.1.186/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:28 +0000 UTC - event for overcommit-1: {multus } AddedInterface: Add eth0 [10.244.2.93/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:28 +0000 UTC - event for overcommit-14: {multus } AddedInterface: Add eth0 [10.244.1.238/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:28 +0000 UTC - event for overcommit-18: {multus } AddedInterface: Add eth0 [10.244.1.232/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:24:28 +0000 UTC - event for overcommit-6: {multus } AddedInterface: Add eth0 [10.244.1.237/24] May 20 15:34:25.786: INFO: At 2021-05-20 15:28:26 +0000 UTC - event for overcommit-10: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.786: INFO: At 2021-05-20 15:28:26 +0000 UTC - event for overcommit-10: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to reserve sandbox name "overcommit-10_sched-pred-2274_e3978547-4a8d-4e13-9915-dd4673c19d05_0": name "overcommit-10_sched-pred-2274_e3978547-4a8d-4e13-9915-dd4673c19d05_0" is reserved for "e79b904d93b9c92ef62647cb320b0a0ac8d831b321cecf7aa816c1972c2ec748" May 20 15:34:25.787: INFO: At 2021-05-20 15:28:26 +0000 UTC - event for overcommit-3: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-1: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod 
sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-12: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-15: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-16: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-18: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-19: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-3: {multus } AddedInterface: Add eth0 [10.244.2.96/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-7: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-8: {kubelet v1.21-worker2} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:27 +0000 UTC - event for overcommit-9: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-1: {multus } AddedInterface: Add eth0 [10.244.2.99/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-12: {multus } AddedInterface: Add eth0 [10.244.2.112/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-14: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-15: {multus } AddedInterface: Add eth0 [10.244.1.239/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-16: {multus } AddedInterface: Add eth0 [10.244.2.97/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-18: {kubelet v1.21-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-18: {kubelet v1.21-worker} Created: Created container overcommit-18 May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-18: {kubelet v1.21-worker} Started: Started container overcommit-18 May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-18: {multus } AddedInterface: Add eth0 [10.244.1.27/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - 
event for overcommit-19: {multus } AddedInterface: Add eth0 [10.244.1.241/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-6: {kubelet v1.21-worker} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-7: {multus } AddedInterface: Add eth0 [10.244.2.109/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-8: {multus } AddedInterface: Add eth0 [10.244.2.100/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-9: {multus } AddedInterface: Add eth0 [10.244.1.28/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-9: {kubelet v1.21-worker} Created: Created container overcommit-9 May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-9: {kubelet v1.21-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine May 20 15:34:25.787: INFO: At 2021-05-20 15:28:28 +0000 UTC - event for overcommit-9: {kubelet v1.21-worker} Started: Started container overcommit-9 May 20 15:34:25.787: INFO: At 2021-05-20 15:28:29 +0000 UTC - event for overcommit-14: {multus } AddedInterface: Add eth0 [10.244.1.58/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:29 +0000 UTC - event for overcommit-6: {multus } AddedInterface: Add eth0 [10.244.1.59/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:28:39 +0000 UTC - event for overcommit-10: {multus } AddedInterface: Add eth0 [10.244.1.79/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:27 +0000 UTC - event for overcommit-3: {multus } AddedInterface: Add eth0 [10.244.2.114/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:28 +0000 UTC - event for overcommit-15: {multus } AddedInterface: Add eth0 [10.244.1.82/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:28 +0000 UTC - event for overcommit-19: {multus } AddedInterface: Add eth0 [10.244.1.80/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-1: {multus } AddedInterface: Add eth0 [10.244.2.242/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-12: {multus } AddedInterface: Add eth0 [10.244.2.127/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-14: {multus } AddedInterface: Add eth0 [10.244.1.86/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-16: {multus } AddedInterface: Add eth0 [10.244.2.237/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-6: {multus } AddedInterface: Add eth0 [10.244.1.83/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-7: {multus } AddedInterface: Add eth0 [10.244.2.243/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:29 +0000 UTC - event for overcommit-8: {multus } AddedInterface: Add eth0 [10.244.2.118/24] May 20 15:34:25.787: INFO: At 2021-05-20 15:32:39 +0000 UTC - event for overcommit-10: {multus } AddedInterface: Add eth0 [10.244.1.88/24] May 20 15:34:25.795: INFO: POD NODE PHASE GRACE CONDITIONS May 20 15:34:25.795: INFO: overcommit-0 v1.21-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-1 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-10 v1.21-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-10]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-10]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-11 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-12 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-12]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-12]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-13 v1.21-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-14 v1.21-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-14]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-14]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-15 v1.21-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-15]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-15]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-16 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-16]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-16]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-17 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-18 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:28:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:28:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-19 v1.21-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-19]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-19]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-2 v1.21-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-3 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-3]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-3]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-4 v1.21-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-5 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-6 v1.21-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers 
with unready status: [overcommit-6]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-6]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-7 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-7]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-7]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-8 v1.21-worker2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-8]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC ContainersNotReady containers with unready status: [overcommit-8]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: overcommit-9 v1.21-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:28:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:28:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-05-20 15:24:25 +0000 UTC }] May 20 15:34:25.795: INFO: May 20 15:34:25.800: INFO: Logging node info for node v1.21-control-plane May 20 15:34:25.803: INFO: Node Info: &Node{ObjectMeta:{v1.21-control-plane 5b69b221-756d-4fdd-a304-8ce35376065e 947090 0 2021-05-16 10:43:52 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux ingress-ready:true kubernetes.io/arch:amd64 kubernetes.io/hostname:v1.21-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-05-16 10:43:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-05-16 10:44:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-05-16 10:45:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:ingress-ready":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v1.21/v1.21-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-20 15:32:30 +0000 UTC,LastTransitionTime:2021-05-16 10:43:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-20 15:32:30 +0000 UTC,LastTransitionTime:2021-05-16 10:43:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-20 15:32:30 +0000 UTC,LastTransitionTime:2021-05-16 10:43:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-20 15:32:30 +0000 UTC,LastTransitionTime:2021-05-16 10:44:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:v1.21-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e5338de4043b4f8baf363786955185db,SystemUUID:451ffe74-6b76-4bef-9b60-8fc2dd6e579e,BootID:be455131-27dd-43f1-b9be-d55ec4651321,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.4-91-g1b05b605c,KubeletVersion:v1.21.0,KubeProxyVersion:v1.21.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0],SizeBytes:126814690,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0],SizeBytes:124178601,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0],SizeBytes:121030979,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:119981371,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed 
ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0],SizeBytes:51866434,},ContainerImage{Names:[docker.io/envoyproxy/envoy@sha256:55d35e368436519136dbd978fa0682c49d8ab99e4d768413510f226762b30b07 docker.io/envoyproxy/envoy:v1.18.3],SizeBytes:51364868,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42582495,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:ec791c2e887b42cd3950632d8c1ea73337ca98414c87fe154620ed3c4e98a052 quay.io/metallb/speaker:main],SizeBytes:39298188,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 15:34:25.804: INFO: Logging kubelet events for node v1.21-control-plane May 20 15:34:25.807: INFO: Logging pods the kubelet thinks is on node v1.21-control-plane May 20 15:34:25.844: INFO: etcd-v1.21-control-plane started at 2021-05-16 10:43:26 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container etcd ready: true, restart count 0 May 20 15:34:25.844: INFO: kube-apiserver-v1.21-control-plane started at 2021-05-16 10:43:36 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kube-apiserver ready: true, restart count 0 May 20 15:34:25.844: INFO: kube-multus-ds-29t4f started at 2021-05-16 10:45:26 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kube-multus ready: true, restart count 4 May 20 15:34:25.844: INFO: kube-scheduler-v1.21-control-plane started at 2021-05-16 10:44:07 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kube-scheduler ready: true, restart count 0 May 20 15:34:25.844: INFO: kube-proxy-jg42s started at 2021-05-16 10:44:10 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:34:25.844: INFO: local-path-provisioner-78776bfc44-8c2c5 started at 2021-05-16 10:44:27 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container local-path-provisioner ready: true, restart count 0 May 20 15:34:25.844: INFO: tune-sysctls-jt9t4 started at 2021-05-16 10:45:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container setsysctls ready: true, restart count 0 May 20 15:34:25.844: INFO: kindnet-9lwvg started at 2021-05-16 10:44:10 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:34:25.844: INFO: speaker-w74lp started at 2021-05-16 10:45:27 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container speaker ready: true, restart count 0 May 20 15:34:25.844: INFO: kube-controller-manager-v1.21-control-plane started at 2021-05-16 10:44:07 
+0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container kube-controller-manager ready: true, restart count 0 May 20 15:34:25.844: INFO: create-loop-devs-jmsvq started at 2021-05-16 10:45:24 +0000 UTC (0+1 container statuses recorded) May 20 15:34:25.844: INFO: Container loopdev ready: true, restart count 0 May 20 15:34:25.844: INFO: envoy-k7tkp started at 2021-05-16 10:45:29 +0000 UTC (1+2 container statuses recorded) May 20 15:34:25.844: INFO: Init container envoy-initconfig ready: true, restart count 0 May 20 15:34:25.844: INFO: Container envoy ready: true, restart count 0 May 20 15:34:25.844: INFO: Container shutdown-manager ready: true, restart count 0 W0520 15:34:25.885170 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 20 15:34:26.233: INFO: Latency metrics for node v1.21-control-plane May 20 15:34:26.233: INFO: Logging node info for node v1.21-worker May 20 15:34:26.579: INFO: Node Info: &Node{ObjectMeta:{v1.21-worker 71d1c8b7-99da-4c75-9f17-8e314f261aea 946628 0 2021-05-16 10:44:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v1.21-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-16 10:44:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubeadm Update v1 2021-05-16 10:44:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-05-20 15:24:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-05-20 15:24:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v1.21/v1.21-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: 
{{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:v1.21-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2594582abaea40308f5491c0492929c4,SystemUUID:b58bfa33-a46a-43b7-9f3c-935bcd2bccba,BootID:be455131-27dd-43f1-b9be-d55ec4651321,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.4-91-g1b05b605c,KubeletVersion:v1.21.0,KubeProxyVersion:v1.21.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0],SizeBytes:126814690,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0],SizeBytes:124178601,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0],SizeBytes:121030979,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:119981371,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9 docker.io/kubernetesui/dashboard:v2.2.0],SizeBytes:67775224,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0],SizeBytes:51866434,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42582495,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:f30d057c09dda8b8c1d4e48864c2074d49b67c59856118be2134636053803d6d k8s.gcr.io/build-image/debian-iptables:buster-v1.6.0],SizeBytes:40403807,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:ec791c2e887b42cd3950632d8c1ea73337ca98414c87fe154620ed3c4e98a052 quay.io/metallb/speaker:main],SizeBytes:39298188,},ContainerImage{Names:[quay.io/metallb/controller@sha256:9926956e63aa3d11377a9ce1c2db53240024a456dc730d1bd112d3c035f4e560 quay.io/metallb/controller:main],SizeBytes:35984712,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 docker.io/kubernetesui/metrics-scraper:v1.0.6],SizeBytes:15079854,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 15:34:26.580: INFO: Logging kubelet events for node v1.21-worker May 20 15:34:26.583: INFO: Logging pods the kubelet thinks is on node v1.21-worker May 20 15:34:26.620: INFO: kube-proxy-42vmb started at 2021-05-16 10:44:23 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-19 
started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-19 ready: false, restart count 0 May 20 15:34:26.620: INFO: contour-74948c9879-cqqjf started at 2021-05-20 14:04:28 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container contour ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-18 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-18 ready: true, restart count 0 May 20 15:34:26.620: INFO: controller-675995489c-xnj8v started at 2021-05-20 14:04:28 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container controller ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-14 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-14 ready: false, restart count 0 May 20 15:34:26.620: INFO: overcommit-11 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-11 ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-17 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-17 ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-5 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-5 ready: true, restart count 0 May 20 15:34:26.620: INFO: kindnet-2qtxh started at 2021-05-16 10:44:23 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:34:26.620: INFO: tune-sysctls-jcgnq started at 2021-05-16 10:45:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container setsysctls ready: true, restart count 0 May 20 15:34:26.620: INFO: dashboard-metrics-scraper-856586f554-75x2x started at 2021-05-16 10:45:29 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-6 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container overcommit-6 ready: false, restart count 0 May 20 15:34:26.620: INFO: kube-multus-ds-xst78 started at 2021-05-16 10:45:26 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container kube-multus ready: true, restart count 0 May 20 15:34:26.620: INFO: kubernetes-dashboard-78c79f97b4-w25tg started at 2021-05-20 14:04:28 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.620: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:34:26.620: INFO: overcommit-9 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container overcommit-9 ready: true, restart count 0 May 20 15:34:26.621: INFO: overcommit-10 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container overcommit-10 ready: false, restart count 0 May 20 15:34:26.621: INFO: overcommit-15 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container overcommit-15 ready: false, restart count 0 May 20 15:34:26.621: INFO: create-loop-devs-965k2 started at 2021-05-16 10:45:24 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container loopdev 
ready: true, restart count 0 May 20 15:34:26.621: INFO: speaker-g5b8b started at 2021-05-16 10:45:27 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container speaker ready: true, restart count 0 May 20 15:34:26.621: INFO: contour-74948c9879-8866g started at 2021-05-16 10:45:29 +0000 UTC (0+1 container statuses recorded) May 20 15:34:26.621: INFO: Container contour ready: true, restart count 0 W0520 15:34:26.791093 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 20 15:34:27.062: INFO: Latency metrics for node v1.21-worker May 20 15:34:27.062: INFO: Logging node info for node v1.21-worker2 May 20 15:34:27.083: INFO: Node Info: &Node{ObjectMeta:{v1.21-worker2 1a13bfbe-436a-4963-a58b-f2f7c83a464b 946627 0 2021-05-16 10:44:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:v1.21-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-05-16 10:44:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2021-05-16 10:44:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2021-05-20 15:24:03 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}} {kubelet Update v1 2021-05-20 15:24:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:allocatable":{"f:example.com/fakePTSRes":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/v1.21/v1.21-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{88 0} {} 88 DecimalSI},ephemeral-storage: {{470632488960 0} {} BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{67430219776 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 
UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-05-20 15:29:29 +0000 UTC,LastTransitionTime:2021-05-16 10:44:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:v1.21-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:b58c5a31a9314d5e97265d48cbd520ba,SystemUUID:a5e091f4-9595-401f-bafb-28bb18b05e99,BootID:be455131-27dd-43f1-b9be-d55ec4651321,KernelVersion:5.4.0-73-generic,OSImage:Ubuntu 20.10,ContainerRuntimeVersion:containerd://1.5.0-beta.4-91-g1b05b605c,KubeletVersion:v1.21.0,KubeProxyVersion:v1.21.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0],SizeBytes:126814690,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0],SizeBytes:124178601,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0],SizeBytes:121030979,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20210326-1e038dc5],SizeBytes:119981371,},ContainerImage{Names:[ghcr.io/k8snetworkplumbingwg/multus-cni@sha256:e72aa733faf24d1f62b69ded1126b9b8da0144d35c8a410c4fa0a860006f9eed ghcr.io/k8snetworkplumbingwg/multus-cni:stable],SizeBytes:104808100,},ContainerImage{Names:[docker.io/kubernetesui/dashboard@sha256:148991563e374c83b75e8c51bca75f512d4f006ddc791e96a91f1c7420b60bd9 docker.io/kubernetesui/dashboard:v2.2.0],SizeBytes:67775224,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0],SizeBytes:51866434,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42582495,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[quay.io/metallb/speaker@sha256:ec791c2e887b42cd3950632d8c1ea73337ca98414c87fe154620ed3c4e98a052 quay.io/metallb/speaker:main],SizeBytes:39298188,},ContainerImage{Names:[quay.io/metallb/controller@sha256:9926956e63aa3d11377a9ce1c2db53240024a456dc730d1bd112d3c035f4e560 quay.io/metallb/controller:main],SizeBytes:35984712,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/projectcontour/contour@sha256:1b6849d5bda1f5b2f839dad799922a043b82debaba9fa907723b5eb4c49f2e9e docker.io/projectcontour/contour:v1.15.1],SizeBytes:11888781,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:66790a2b79e1ea3e1dabac43990c54aca5d1ddf268d9a5a0285e4167c8b24475 docker.io/library/alpine:3.6],SizeBytes:2021226,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.4.1],SizeBytes:685714,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 20 15:34:27.084: INFO: Logging kubelet events for node v1.21-worker2 May 20 15:34:27.087: INFO: Logging pods the kubelet thinks is on node v1.21-worker2 May 20 15:34:27.117: INFO: overcommit-13 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-13 ready: true, restart count 0 May 20 15:34:27.117: INFO: kube-multus-ds-4pmk4 started at 2021-05-20 14:04:43 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container kube-multus ready: true, restart count 0 May 20 15:34:27.117: INFO: overcommit-4 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-4 ready: true, restart count 0 May 20 15:34:27.117: INFO: overcommit-8 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-8 ready: false, restart count 0 May 20 15:34:27.117: INFO: create-loop-devs-jq69v started at 2021-05-20 14:05:01 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container loopdev ready: true, restart count 0 May 20 15:34:27.117: INFO: coredns-558bd4d5db-xg8b5 started at 2021-05-20 14:31:04 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container coredns ready: true, 
restart count 0 May 20 15:34:27.117: INFO: overcommit-16 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-16 ready: false, restart count 0 May 20 15:34:27.117: INFO: overcommit-3 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-3 ready: false, restart count 0 May 20 15:34:27.117: INFO: speaker-67fwk started at 2021-05-20 14:04:33 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container speaker ready: true, restart count 0 May 20 15:34:27.117: INFO: overcommit-1 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-1 ready: false, restart count 0 May 20 15:34:27.117: INFO: overcommit-2 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-2 ready: true, restart count 0 May 20 15:34:27.117: INFO: kindnet-xkwvl started at 2021-05-16 10:44:23 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:34:27.117: INFO: overcommit-12 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-12 ready: false, restart count 0 May 20 15:34:27.117: INFO: overcommit-7 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-7 ready: false, restart count 0 May 20 15:34:27.117: INFO: overcommit-0 started at 2021-05-20 15:24:25 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container overcommit-0 ready: true, restart count 0 May 20 15:34:27.117: INFO: tune-sysctls-pgxh4 started at 2021-05-20 14:04:33 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container setsysctls ready: true, restart count 0 May 20 15:34:27.117: INFO: kube-proxy-gh4rd started at 2021-05-16 10:44:23 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:34:27.117: INFO: coredns-558bd4d5db-r5ppk started at 2021-05-20 14:31:04 +0000 UTC (0+1 container statuses recorded) May 20 15:34:27.117: INFO: Container coredns ready: true, restart count 0 W0520 15:34:27.127212 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. May 20 15:34:27.354: INFO: Latency metrics for node v1.21-worker2 May 20 15:34:27.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2274" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • Failure [601.950 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 20 15:34:25.717: Unexpected error: <*errors.errorString | 0xc00430a090>: { s: "Error waiting for 20 pods to be running - probably a timeout: Timeout while waiting for pods with labels \"startPodsID=922928b8-ab1e-4f13-bf5d-716cbd8d6bd9\" to be running", } Error waiting for 20 pods to be running - probably a timeout: Timeout while waiting for pods with labels "startPodsID=922928b8-ab1e-4f13-bf5d-716cbd8d6bd9" to be running occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:165 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":1,"skipped":310,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:34:27.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 20 15:34:27.781: INFO: Waiting up to 1m0s for all nodes to be ready May 20 15:35:27.832: INFO: Waiting for terminating namespaces to be deleted... May 20 15:35:27.834: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 15:35:27.848: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 15:35:27.848: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
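For context on the failed spec above ([Feature:LocalStorageCapacityIsolation]): the test starts 20 pods carrying ephemeral-storage requests and, per the error, timed out waiting for the pods labelled startPodsID=922928b8-… to reach Running. Below is a minimal sketch, not the e2e framework's own helper, of a pod that declares the ephemeral-storage request/limit this predicate enforces; the GenerateName prefix, the label value, the image, and the 10Gi figure are illustrative assumptions only.

// Sketch: a pod with an ephemeral-storage request/limit (illustrative values).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "ephemeral-storage-",                        // hypothetical name prefix
			Labels:       map[string]string{"startPodsID": "example"}, // label the test waits on
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: resource.MustParse("10Gi")},
					Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: resource.MustParse("10Gi")},
				},
			}},
		},
	}
	// The scheduler only admits the pod to a node whose allocatable ephemeral-storage covers the request.
	fmt.Println(pod.Spec.Containers[0].Resources.Limits)
}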
May 20 15:35:27.860: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:35:27.860: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:35:27.860: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.860: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:35:27.860: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 May 20 15:35:27.876: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 
419430400 May 20 15:35:27.876: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:35:27.876: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:35:27.876: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.876: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:35:27.877: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:35:27.877: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:35:27.888: INFO: Waiting for running... May 20 15:35:32.946: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 15:35:38.017: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.017: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Node: v1.21-worker, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:35:38.018: INFO: Node: v1.21-worker, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
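Aside on the ComputeCPUMemFraction figures in this run: each fraction is simply requested/allocatable, and the value appears to be clamped to 1 once the balancing pods push requests past allocatable (526900 milli-CPU requested against 88000 allocatable is reported as cpuFraction: 1, while the idle node's 100/88000 gives 0.0011363…). A small sketch of that arithmetic; the clamping is inferred from the log output, not quoted from the framework source.

package main

import "fmt"

// fraction mirrors the ratio printed by ComputeCPUMemFraction:
// requested / allocatable, clamped to 1 (clamping inferred from the log).
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		return 1
	}
	return f
}

func main() {
	fmt.Println(fraction(100, 88000))             // 0.0011363636… (idle worker, CPU)
	fmt.Println(fraction(104857600, 67430219776)) // 0.0015550535… (idle worker, memory)
	fmt.Println(fraction(526900, 88000))          // 1 (after the balancing pods are created)
}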
May 20 15:35:38.018: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Pod for on the node: 6a7bf510-3f1c-4297-ae5b-f434f49736ae-0, Cpu: 43900, Mem: 33622835200 May 20 15:35:38.018: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:35:38.018: INFO: Node: v1.21-worker2, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4b5685f2-6ae6-4884-af5f=testing-taint-value-5148e0eb-3162-45ad-989b-882a4b27db54:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3428b54b-4387-4ae4-be3f=testing-taint-value-29cff0e6-98f3-4f44-b277-e32d7de0fd65:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b454a6cc-d01d-43dd-abd0=testing-taint-value-b75b870d-87cb-4ae1-a8bb-9b29da0c838b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8fe2830a-43fd-41cc-bc8f=testing-taint-value-46d3bbad-1d1e-4cf4-b76b-32587bde8911:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-734ffc7b-f206-41ed-a239=testing-taint-value-f655bdb6-afbb-4396-8021-c4797ec7bd76:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ae2935e3-4e25-45d3-85be=testing-taint-value-c7663b97-a490-4440-b054-e004c301cd75:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6655c50c-6fc8-426a-9363=testing-taint-value-bb80c9ff-4631-4624-bc75-c1ff5f8881ba:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-082ccb9d-2079-4311-a459=testing-taint-value-882ed8b1-f873-4d68-ad88-5757ae86204e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4f111807-3cb3-47a7-8489=testing-taint-value-95064aee-033d-43c2-a3dd-3aff48dcc8c9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b78935e4-df56-4780-808e=testing-taint-value-730fed61-24fa-4116-a510-3d68167425d9:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-369d01e0-c928-4994-8823=testing-taint-value-ea66b692-4941-472e-a563-b16a0a6bccd6:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-c093786b-bece-4cbb-a7a1=testing-taint-value-f919c1d7-0309-476d-8ce3-f36af655a2b6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b13adc97-cac9-4c4b-b861=testing-taint-value-31c78081-cb38-4738-b3ec-393cd09155eb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9c2cf232-5ce4-4147-83c4=testing-taint-value-0080ff25-2ed1-44ca-a0e5-9e243af3e569:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dbaa5701-9463-4320-bb5e=testing-taint-value-c542d839-00b3-4318-9fb2-ed1fa1f90f36:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-79c3f10e-af6d-4209-92ec=testing-taint-value-b4aeb18d-0068-4d45-bf40-92bf4884a6e1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0a114ced-a899-43b4-ae0a=testing-taint-value-419019af-3e87-4016-9b16-bc44831d98eb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-24dddb10-6948-4663-bc42=testing-taint-value-6014d1ae-c14c-492b-b84f-2125073d3a5f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f1523588-729e-4298-bc6a=testing-taint-value-51f36377-6151-4d62-a896-c1d30fadc325:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a1ae04af-4c7d-4669-9ba6=testing-taint-value-164e98d4-35a3-41d4-8fa3-4425baf1ae6d:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-369d01e0-c928-4994-8823=testing-taint-value-ea66b692-4941-472e-a563-b16a0a6bccd6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c093786b-bece-4cbb-a7a1=testing-taint-value-f919c1d7-0309-476d-8ce3-f36af655a2b6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b13adc97-cac9-4c4b-b861=testing-taint-value-31c78081-cb38-4738-b3ec-393cd09155eb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9c2cf232-5ce4-4147-83c4=testing-taint-value-0080ff25-2ed1-44ca-a0e5-9e243af3e569:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dbaa5701-9463-4320-bb5e=testing-taint-value-c542d839-00b3-4318-9fb2-ed1fa1f90f36:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-79c3f10e-af6d-4209-92ec=testing-taint-value-b4aeb18d-0068-4d45-bf40-92bf4884a6e1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0a114ced-a899-43b4-ae0a=testing-taint-value-419019af-3e87-4016-9b16-bc44831d98eb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-24dddb10-6948-4663-bc42=testing-taint-value-6014d1ae-c14c-492b-b84f-2125073d3a5f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f1523588-729e-4298-bc6a=testing-taint-value-51f36377-6151-4d62-a896-c1d30fadc325:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a1ae04af-4c7d-4669-9ba6=testing-taint-value-164e98d4-35a3-41d4-8fa3-4425baf1ae6d:PreferNoSchedule 
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4b5685f2-6ae6-4884-af5f=testing-taint-value-5148e0eb-3162-45ad-989b-882a4b27db54:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3428b54b-4387-4ae4-be3f=testing-taint-value-29cff0e6-98f3-4f44-b277-e32d7de0fd65:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b454a6cc-d01d-43dd-abd0=testing-taint-value-b75b870d-87cb-4ae1-a8bb-9b29da0c838b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8fe2830a-43fd-41cc-bc8f=testing-taint-value-46d3bbad-1d1e-4cf4-b76b-32587bde8911:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-734ffc7b-f206-41ed-a239=testing-taint-value-f655bdb6-afbb-4396-8021-c4797ec7bd76:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ae2935e3-4e25-45d3-85be=testing-taint-value-c7663b97-a490-4440-b054-e004c301cd75:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6655c50c-6fc8-426a-9363=testing-taint-value-bb80c9ff-4631-4624-bc75-c1ff5f8881ba:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-082ccb9d-2079-4311-a459=testing-taint-value-882ed8b1-f873-4d68-ad88-5757ae86204e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4f111807-3cb3-47a7-8489=testing-taint-value-95064aee-033d-43c2-a3dd-3aff48dcc8c9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b78935e4-df56-4780-808e=testing-taint-value-730fed61-24fa-4116-a510-3d68167425d9:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:35:53.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5378" for this suite. 
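The SchedulerPriorities spec above applies ten soft (PreferNoSchedule) taints to the first node and ten different ones to the other node, then creates a pod tolerating only the first node's taints and expects the scheduler to prefer that node. A minimal sketch of one such taint/toleration pair, not the test's own helper; the key and value strings are illustrative placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One of the ten soft taints applied to the tolerable node (placeholder key/value).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example",
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// The test pod carries a matching toleration for every taint on the first
	// node, so scoring prefers that node over the nodes with intolerable taints.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("%s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Println(toleration.ToleratesTaint(&taint)) // true
}

Because the effect is PreferNoSchedule rather than NoSchedule, untolerated taints only lower a node's score; they do not make the node infeasible, which is why the spec checks preference rather than a hard placement.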
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:86.562 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":2,"skipped":348,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:35:53.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:35:53.965: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:35:53.973: INFO: Waiting for terminating namespaces to be deleted... 
May 20 15:35:53.976: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:35:53.986: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container loopdev ready: true, restart count 0 May 20 15:35:53.986: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:35:53.986: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container kube-multus ready: true, restart count 0 May 20 15:35:53.986: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:35:53.986: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container setsysctls ready: true, restart count 0 May 20 15:35:53.986: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:35:53.986: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:35:53.986: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container controller ready: true, restart count 0 May 20 15:35:53.986: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container speaker ready: true, restart count 0 May 20 15:35:53.986: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container contour ready: true, restart count 0 May 20 15:35:53.986: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container contour ready: true, restart count 0 May 20 15:35:53.986: INFO: with-tolerations from sched-priority-5378 started at 2021-05-20 15:35:38 +0000 UTC (1 container statuses recorded) May 20 15:35:53.986: INFO: Container with-tolerations ready: true, restart count 0 May 20 15:35:53.986: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:35:53.994: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container coredns ready: true, restart count 0 May 20 15:35:53.994: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container coredns ready: true, restart count 0 May 20 15:35:53.994: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container loopdev ready: true, restart count 0 May 20 15:35:53.994: INFO: kindnet-xkwvl 
from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:35:53.994: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container kube-multus ready: true, restart count 0 May 20 15:35:53.994: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:35:53.994: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container setsysctls ready: true, restart count 0 May 20 15:35:53.994: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:35:53.994: INFO: Container speaker ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:36:02.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6603" for this suite. 
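The PodTopologySpread Filtering spec above labels both workers with the dedicated key kubernetes.io/e2e-pts-filter and expects 4 pods with MaxSkew=1 to spread 2 and 2 across them. A minimal sketch, not the test's own code, of the constraint such pods carry; the pod label app=e2e-pts-filter, the name prefix, and the image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard spreading constraint: with MaxSkew=1 over the dedicated topology key,
	// four matching pods can only be placed 2+2 across the two labelled nodes.
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // illustrative selector
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "rs-e2e-pts-filter-",
			Labels:       map[string]string{"app": "e2e-pts-filter"},
		},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{constraint},
			Containers: []corev1.Container{{
				Name:  "e2e-pts-filter",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}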
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.200 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":3,"skipped":525,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:36:02.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:36:02.171: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:36:02.178: INFO: Waiting for terminating namespaces to be deleted... 
May 20 15:36:02.181: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:36:02.190: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container loopdev ready: true, restart count 0 May 20 15:36:02.190: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:36:02.190: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container kube-multus ready: true, restart count 0 May 20 15:36:02.190: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:36:02.190: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container setsysctls ready: true, restart count 0 May 20 15:36:02.190: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:36:02.190: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:36:02.190: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container controller ready: true, restart count 0 May 20 15:36:02.190: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container speaker ready: true, restart count 0 May 20 15:36:02.190: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container contour ready: true, restart count 0 May 20 15:36:02.190: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container contour ready: true, restart count 0 May 20 15:36:02.190: INFO: rs-e2e-pts-filter-j4sgs from sched-pred-6603 started at 2021-05-20 15:35:58 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container e2e-pts-filter ready: true, restart count 0 May 20 15:36:02.190: INFO: rs-e2e-pts-filter-scc4d from sched-pred-6603 started at 2021-05-20 15:35:58 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container e2e-pts-filter ready: true, restart count 0 May 20 15:36:02.190: INFO: with-tolerations from sched-priority-5378 started at 2021-05-20 15:35:38 +0000 UTC (1 container statuses recorded) May 20 15:36:02.190: INFO: Container with-tolerations ready: false, restart count 0 May 20 15:36:02.190: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:36:02.199: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container coredns ready: true, restart count 0 May 20 
15:36:02.200: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container coredns ready: true, restart count 0 May 20 15:36:02.200: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container loopdev ready: true, restart count 0 May 20 15:36:02.200: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:36:02.200: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container kube-multus ready: true, restart count 0 May 20 15:36:02.200: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:36:02.200: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container setsysctls ready: true, restart count 0 May 20 15:36:02.200: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container speaker ready: true, restart count 0 May 20 15:36:02.200: INFO: rs-e2e-pts-filter-497h7 from sched-pred-6603 started at 2021-05-20 15:35:58 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container e2e-pts-filter ready: true, restart count 0 May 20 15:36:02.200: INFO: rs-e2e-pts-filter-dgchp from sched-pred-6603 started at 2021-05-20 15:35:58 +0000 UTC (1 container statuses recorded) May 20 15:36:02.200: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1ae9fcab-bdad-4974-a89e-0c805e3ff3dc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-1ae9fcab-bdad-4974-a89e-0c805e3ff3dc off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1ae9fcab-bdad-4974-a89e-0c805e3ff3dc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:36:10.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1147" for this suite. 
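The hostPort spec above places three pods on the same node, all binding hostPort 54321 but differing in hostIP (127.0.0.1 vs 172.18.0.4) or protocol (TCP vs UDP), and expects all three to schedule because only the full (hostIP, hostPort, protocol) triple has to be unique per node. A minimal sketch of the three port declarations, not the test's own helper; the container image and containerPort 80 are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPortContainer models the single container of each test pod: hostPort
// 54321 with the given host IP and protocol (image/containerPort illustrative).
func hostPortContainer(name, hostIP string, proto corev1.Protocol) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
		Ports: []corev1.ContainerPort{{
			ContainerPort: 80,
			HostPort:      54321,
			HostIP:        hostIP,
			Protocol:      proto,
		}},
	}
}

func main() {
	containers := []corev1.Container{
		hostPortContainer("pod1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortContainer("pod2", "172.18.0.4", corev1.ProtocolTCP), // same port, different hostIP
		hostPortContainer("pod3", "172.18.0.4", corev1.ProtocolUDP), // same port and IP, different protocol
	}
	for _, c := range containers {
		p := c.Ports[0]
		fmt.Printf("%s -> %s:%d/%s\n", c.Name, p.HostIP, p.HostPort, p.Protocol)
	}
}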
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.163 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":4,"skipped":1293,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:36:10.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 20 15:36:10.357: INFO: Waiting up to 1m0s for all nodes to be ready May 20 15:37:10.400: INFO: Waiting for terminating namespaces to be deleted... May 20 15:37:10.403: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 15:37:10.418: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 15:37:10.418: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 20 15:37:10.431: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:37:10.431: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:37:10.431: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.431: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:37:10.431: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 May 20 15:37:10.444: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 
May 20 15:37:10.444: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:37:10.444: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:37:10.444: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:37:10.444: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:37:10.444: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:37:10.455: INFO: Waiting for running... May 20 15:37:15.511: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 15:37:20.580: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:37:20.580: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.580: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.580: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.580: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Node: v1.21-worker, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:37:20.581: INFO: Node: v1.21-worker, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
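The ComputeCPUMemFraction entries above reduce to requested/allocatable ratios: 100 millicores against 88000 allocatable gives cpuFraction 0.0011363636, and 104857600 bytes against 67430219776 gives memFraction 0.0015550535; once the balancing pods are created the requests exceed allocatable, so the helper reports both fractions as 1. A small worked sketch with the numbers from the log (the fraction helper name is illustrative):

    package main

    import "fmt"

    // fraction returns requested/allocatable, capped at 1 the way the
    // e2e helper caps it once the balancing pods saturate the node.
    func fraction(requested, allocatable int64) float64 {
        f := float64(requested) / float64(allocatable)
        if f > 1 {
            f = 1
        }
        return f
    }

    func main() {
        // Values from the log for node v1.21-worker before balancing.
        fmt.Println(fraction(100, 88000))             // ~0.0011363636 (CPU, millicores)
        fmt.Println(fraction(104857600, 67430219776)) // ~0.0015550535 (memory, bytes)
        // After the balanced pods are created the request exceeds allocatable,
        // so the reported fraction is 1.
        fmt.Println(fraction(526900, 88000)) // 1 (CPU)
    }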
May 20 15:37:20.581: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Pod for on the node: cbdf5d3e-2e6e-4dfd-a2aa-b82dccfb2c3c-0, Cpu: 43900, Mem: 33622835200 May 20 15:37:20.581: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:37:20.581: INFO: Node: v1.21-worker2, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8796 to 1 STEP: Verify the pods should not scheduled to the node: v1.21-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8796, will wait for the garbage collector to delete the pods May 20 15:37:26.767: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.842498ms May 20 15:37:26.868: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.189458ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:37:43.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8796" for this suite. 
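This spec annotates v1.21-worker with a preferAvoidPods payload and then verifies the scaled-up ReplicationController pod lands on the other worker. Below is a hedged sketch of reading such an annotation; the avoidPods struct is a trimmed-down stand-in for the real core/v1 AvoidPods payload, and only the annotation key is assumed to be the one the scoring logic reads.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // avoidPods is a simplified stand-in for the annotation payload; the real
    // structure carries a pod signature (e.g. the owning controller) per entry.
    type avoidPods struct {
        PreferAvoidPods []struct {
            Reason string `json:"reason"`
        } `json:"preferAvoidPods"`
    }

    // nodeAvoidsPods reports whether the node annotation asks the scheduler to
    // steer replicated pods away from this node.
    func nodeAvoidsPods(annotations map[string]string) bool {
        raw, ok := annotations["scheduler.alpha.kubernetes.io/preferAvoidPods"]
        if !ok {
            return false
        }
        var ap avoidPods
        if err := json.Unmarshal([]byte(raw), &ap); err != nil {
            return false
        }
        return len(ap.PreferAvoidPods) > 0
    }

    func main() {
        annotations := map[string]string{
            "scheduler.alpha.kubernetes.io/preferAvoidPods": `{"preferAvoidPods":[{"reason":"some reason"}]}`,
        }
        fmt.Println(nodeAvoidsPods(annotations)) // true: score this node lowest for RC/RS pods
    }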
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:93.089 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":5,"skipped":1529,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:37:43.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:37:43.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:37:43.457: INFO: Waiting for terminating namespaces to be deleted... 
May 20 15:37:43.460: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:37:43.469: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container loopdev ready: true, restart count 0 May 20 15:37:43.469: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:37:43.469: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container kube-multus ready: true, restart count 0 May 20 15:37:43.469: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:37:43.469: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container setsysctls ready: true, restart count 0 May 20 15:37:43.469: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:37:43.469: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:37:43.469: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container controller ready: true, restart count 0 May 20 15:37:43.469: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container speaker ready: true, restart count 0 May 20 15:37:43.469: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container contour ready: true, restart count 0 May 20 15:37:43.469: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:37:43.469: INFO: Container contour ready: true, restart count 0 May 20 15:37:43.469: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:37:43.478: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:37:43.478: INFO: Container coredns ready: true, restart count 0 May 20 15:37:43.479: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container coredns ready: true, restart count 0 May 20 15:37:43.479: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container loopdev ready: true, restart count 0 May 20 15:37:43.479: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:37:43.479: INFO: kube-multus-ds-4pmk4 from 
kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container kube-multus ready: true, restart count 0 May 20 15:37:43.479: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:37:43.479: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container setsysctls ready: true, restart count 0 May 20 15:37:43.479: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:37:43.479: INFO: Container speaker ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d074b6536079], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d0751a1a9c8f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d0762248b9bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2182/filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d0764dfb00fc], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.127/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d076bfb24231], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d076de1201cc], Reason = [Created], Message = [Created container filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac] STEP: Considering event: Type = [Normal], Name = [filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac.1680d076e8881da1], Reason = [Started], Message = [Started container filler-pod-c4b48fb5-b3d5-497c-8dba-27d12b3ee4ac] STEP: Considering event: Type = [Normal], Name = [without-label.1680d0743cb07b40], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2182/without-label to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [without-label.1680d0745abd9aab], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.118/24]] STEP: Considering event: Type = [Normal], Name = [without-label.1680d074678a5d49], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.1680d07468a80c2c], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.1680d07470121d92], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.1680d074b4f53bbb], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-podaa79116e-85a5-4002-83c4-b4a71086ba92.1680d0770b826513], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:37:56.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2182" for this suite. 
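In the sched-pred-2182 spec the RuntimeClass adds a pod overhead on top of the container requests for the fake example.com/beardsecond resource, so the filler pod reserves most of the node and the additional pod stays Pending with the "Insufficient example.com/beardsecond" events shown above. A minimal sketch of that accounting follows; the quantities and helper name are illustrative, not the exact values the test uses.

    package main

    import "fmt"

    // effectiveRequest is the amount the scheduler reserves for a pod on a node:
    // the sum of its container requests plus any RuntimeClass pod overhead.
    func effectiveRequest(containerRequests []int64, overhead int64) int64 {
        var total int64
        for _, r := range containerRequests {
            total += r
        }
        return total + overhead
    }

    func main() {
        allocatable := int64(1000) // node capacity of the extended resource (illustrative)
        filler := effectiveRequest([]int64{700}, 250)
        extra := effectiveRequest([]int64{100}, 250)
        fmt.Println(filler)                     // 950 reserved by the filler pod
        fmt.Println(filler+extra > allocatable) // true: the extra pod is unschedulable
    }

Without the overhead term the second pod would fit, which is exactly what "verify pod overhead is accounted for" is guarding against.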
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:13.193 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":6,"skipped":1847,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:37:56.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 20 15:37:56.648: INFO: Waiting up to 1m0s for all nodes to be ready May 20 15:38:56.694: INFO: Waiting for terminating namespaces to be deleted... May 20 15:38:56.698: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 15:38:56.712: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 15:38:56.712: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 20 15:38:56.725: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:38:56.726: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:38:56.726: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:38:56.726: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:38:56.726: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:390 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
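The scoring spec below places a 4-replica ReplicaSet on v1.21-worker2 and then expects the test-pod to land on v1.21-worker, since that placement minimizes skew across the two kubernetes.io/e2e-pts-score domains. A rough sketch of the skew idea behind topologySpreadConstraints scoring (concept only, not the scheduler plugin's code):

    package main

    import "fmt"

    // skew is the difference between the matching-pod count a placement would
    // produce in one topology domain and the minimum count across all domains.
    func skew(countsByDomain map[string]int, candidate string) int {
        counts := map[string]int{}
        for d, c := range countsByDomain {
            counts[d] = c
        }
        counts[candidate]++ // place the incoming pod in the candidate domain
        min := -1
        for _, c := range counts {
            if min == -1 || c < min {
                min = c
            }
        }
        return counts[candidate] - min
    }

    func main() {
        // 4 matching replicas already on v1.21-worker2, none on v1.21-worker.
        counts := map[string]int{"v1.21-worker": 0, "v1.21-worker2": 4}
        fmt.Println(skew(counts, "v1.21-worker"))  // 0: evens the spread, scored best
        fmt.Println(skew(counts, "v1.21-worker2")) // 5: increases imbalance, scored worst
    }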
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 May 20 15:39:00.823: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:39:00.823: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:39:00.823: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:39:00.823: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:39:00.823: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:39:00.828: INFO: Waiting for running... May 20 15:39:05.888: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 20 15:39:10.955: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:39:10.955: INFO: Node: v1.21-worker2, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 15:39:10.955: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Pod for on the node: f78e0551-273e-49e2-a127-e87bcc43c425-0, Cpu: 43900, Mem: 33622835200 May 20 15:39:10.955: INFO: Node: v1.21-worker, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 20 15:39:10.955: INFO: Node: v1.21-worker, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "v1.21-worker2" STEP: Verifying if the test-pod lands on node "v1.21-worker" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:398 STEP: removing the label kubernetes.io/e2e-pts-score off the node v1.21-worker2 STEP: verifying 
the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:39:25.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5437" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:88.430 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:386 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":7,"skipped":2739,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:39:25.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:39:25.080: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:39:25.089: INFO: Waiting for terminating namespaces to be deleted... 
May 20 15:39:25.092: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:39:25.101: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:25.101: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:25.101: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:25.101: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:25.101: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:25.101: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:25.101: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:39:25.101: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:39:25.102: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container controller ready: true, restart count 0 May 20 15:39:25.102: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container speaker ready: true, restart count 0 May 20 15:39:25.102: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container contour ready: true, restart count 0 May 20 15:39:25.102: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container contour ready: true, restart count 0 May 20 15:39:25.102: INFO: test-pod from sched-priority-5437 started at 2021-05-20 15:39:14 +0000 UTC (1 container statuses recorded) May 20 15:39:25.102: INFO: Container test-pod ready: true, restart count 0 May 20 15:39:25.102: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:39:25.111: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container coredns ready: true, restart count 0 May 20 15:39:25.111: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container coredns ready: true, restart count 0 May 20 15:39:25.111: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:25.111: INFO: kindnet-xkwvl from kube-system 
started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:25.111: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:25.111: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:25.111: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:25.111: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container speaker ready: true, restart count 0 May 20 15:39:25.111: INFO: rs-e2e-pts-score-4d75s from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:25.111: INFO: rs-e2e-pts-score-dxf4x from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:25.111: INFO: rs-e2e-pts-score-fncnz from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:25.111: INFO: rs-e2e-pts-score-xdg5f from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:25.111: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-64e8fb19-b453-4b62-bb5d-3f66e73eacaa=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-73b89e20-ae4a-4066-8abd-5b00bdf65c37 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-73b89e20-ae4a-4066-8abd-5b00bdf65c37 off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-73b89e20-ae4a-4066-8abd-5b00bdf65c37 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-64e8fb19-b453-4b62-bb5d-3f66e73eacaa=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:39:29.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3434" for this suite. 
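The sched-pred-3434 spec applies a random NoSchedule taint to a node and relaunches the pod with a matching toleration so it can schedule there. A minimal sketch of the toleration-matching rule, simplified to the Equal and Exists operators (the type and function names are illustrative):

    package main

    import "fmt"

    type taint struct{ key, value, effect string }
    type toleration struct{ key, operator, value, effect string }

    // tolerates reports whether a single toleration matches a taint.
    // Simplified: "Equal" compares values, "Exists" ignores them, and an
    // empty toleration effect matches any effect.
    func tolerates(tol toleration, t taint) bool {
        if tol.key != t.key {
            return false
        }
        if tol.effect != "" && tol.effect != t.effect {
            return false
        }
        switch tol.operator {
        case "Exists":
            return true
        default: // "Equal"
            return tol.value == t.value
        }
    }

    func main() {
        t := taint{
            key:    "kubernetes.io/e2e-taint-key-64e8fb19-b453-4b62-bb5d-3f66e73eacaa",
            value:  "testing-taint-value",
            effect: "NoSchedule",
        }
        tol := toleration{key: t.key, operator: "Equal", value: "testing-taint-value", effect: "NoSchedule"}
        fmt.Println(tolerates(tol, t)) // true: the relaunched pod can land on the tainted node
    }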
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":8,"skipped":2873,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:39:29.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:39:29.266: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:39:29.274: INFO: Waiting for terminating namespaces to be deleted... 
May 20 15:39:29.277: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:39:29.287: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:29.287: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:29.287: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:29.287: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:29.287: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:29.287: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:39:29.287: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:39:29.287: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container controller ready: true, restart count 0 May 20 15:39:29.287: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container speaker ready: true, restart count 0 May 20 15:39:29.287: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container contour ready: true, restart count 0 May 20 15:39:29.287: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container contour ready: true, restart count 0 May 20 15:39:29.287: INFO: test-pod from sched-priority-5437 started at 2021-05-20 15:39:14 +0000 UTC (1 container statuses recorded) May 20 15:39:29.287: INFO: Container test-pod ready: true, restart count 0 May 20 15:39:29.287: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:39:29.296: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container coredns ready: true, restart count 0 May 20 15:39:29.296: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container coredns ready: true, restart count 0 May 20 15:39:29.296: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:29.296: INFO: kindnet-xkwvl from kube-system 
started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:29.296: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:29.296: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:29.296: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:29.296: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container speaker ready: true, restart count 0 May 20 15:39:29.296: INFO: with-tolerations from sched-pred-3434 started at 2021-05-20 15:39:27 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container with-tolerations ready: true, restart count 0 May 20 15:39:29.296: INFO: rs-e2e-pts-score-4d75s from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:29.296: INFO: rs-e2e-pts-score-dxf4x from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:29.296: INFO: rs-e2e-pts-score-fncnz from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 15:39:29.296: INFO: rs-e2e-pts-score-xdg5f from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:29.296: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a86af1b6-7b92-4174-91b4-91a0f38e2056 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a86af1b6-7b92-4174-91b4-91a0f38e2056 off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a86af1b6-7b92-4174-91b4-91a0f38e2056 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:39:33.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5174" for this suite. 
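The sched-pred-5174 spec labels a node with the random key shown above (value 42) and relaunches the pod requiring that label, so only the labeled node passes the filter. A small sketch of the required label match at the heart of nodeSelector / required node affinity (the helper name is illustrative; the real API also supports operators such as In, NotIn, and Exists):

    package main

    import "fmt"

    // matchesRequired reports whether a node's labels satisfy every required
    // key/value pair, the core of a required node-affinity / nodeSelector term.
    func matchesRequired(nodeLabels, required map[string]string) bool {
        for k, v := range required {
            if nodeLabels[k] != v {
                return false
            }
        }
        return true
    }

    func main() {
        required := map[string]string{"kubernetes.io/e2e-a86af1b6-7b92-4174-91b4-91a0f38e2056": "42"}
        labeledNode := map[string]string{"kubernetes.io/e2e-a86af1b6-7b92-4174-91b4-91a0f38e2056": "42"}
        fmt.Println(matchesRequired(labeledNode, required))          // true: pod may schedule here
        fmt.Println(matchesRequired(map[string]string{}, required))  // false: other nodes are filtered out
    }

The "not matching" spec further down in this run is the inverse case: no node carries the required label, so the pod stays Pending with a FailedScheduling event.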
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":9,"skipped":3152,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:39:33.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:39:33.402: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:39:33.410: INFO: Waiting for terminating namespaces to be deleted... May 20 15:39:33.414: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:39:33.423: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:39:33.423: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:33.423: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:33.424: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:33.424: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:33.424: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:33.424: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:39:33.424: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:39:33.424: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC 
(1 container statuses recorded) May 20 15:39:33.424: INFO: Container controller ready: true, restart count 0 May 20 15:39:33.424: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container speaker ready: true, restart count 0 May 20 15:39:33.424: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container contour ready: true, restart count 0 May 20 15:39:33.424: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container contour ready: true, restart count 0 May 20 15:39:33.424: INFO: test-pod from sched-priority-5437 started at 2021-05-20 15:39:14 +0000 UTC (1 container statuses recorded) May 20 15:39:33.424: INFO: Container test-pod ready: false, restart count 0 May 20 15:39:33.424: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:39:33.432: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container coredns ready: true, restart count 0 May 20 15:39:33.432: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container coredns ready: true, restart count 0 May 20 15:39:33.432: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:33.432: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:33.432: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:33.432: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:33.432: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:33.432: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container speaker ready: true, restart count 0 May 20 15:39:33.432: INFO: with-tolerations from sched-pred-3434 started at 2021-05-20 15:39:27 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container with-tolerations ready: true, restart count 0 May 20 15:39:33.432: INFO: with-labels from sched-pred-5174 started at 2021-05-20 15:39:31 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container with-labels ready: true, restart count 0 May 20 15:39:33.432: INFO: rs-e2e-pts-score-4d75s from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:33.432: INFO: rs-e2e-pts-score-dxf4x from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 
container statuses recorded) May 20 15:39:33.432: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:33.432: INFO: rs-e2e-pts-score-fncnz from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:33.432: INFO: rs-e2e-pts-score-xdg5f from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:33.432: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1680d08dd6aedd15], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:39:34.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7566" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":10,"skipped":3393,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:39:34.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 15:39:34.515: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 15:39:34.522: INFO: Waiting for terminating namespaces to be deleted... May 20 15:39:34.525: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 20 15:39:34.534: INFO: create-loop-devs-965k2 from kube-system started at 2021-05-16 10:45:24 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:34.534: INFO: kindnet-2qtxh from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:34.534: INFO: kube-multus-ds-xst78 from kube-system started at 2021-05-16 10:45:26 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:34.534: INFO: kube-proxy-42vmb from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:34.534: INFO: tune-sysctls-jcgnq from kube-system started at 2021-05-16 10:45:25 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:34.534: INFO: dashboard-metrics-scraper-856586f554-75x2x from kubernetes-dashboard started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 20 15:39:34.534: INFO: kubernetes-dashboard-78c79f97b4-w25tg from kubernetes-dashboard started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 20 15:39:34.534: INFO: controller-675995489c-xnj8v from metallb-system started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container controller ready: true, restart count 0 May 20 15:39:34.534: INFO: speaker-g5b8b from metallb-system started at 2021-05-16 10:45:27 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container speaker ready: true, restart count 0 May 20 15:39:34.534: INFO: contour-74948c9879-8866g from projectcontour started at 2021-05-16 10:45:29 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container contour ready: true, restart count 0 May 20 15:39:34.534: INFO: contour-74948c9879-cqqjf from projectcontour started at 2021-05-20 14:04:28 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container contour ready: true, restart count 0 May 20 15:39:34.534: INFO: test-pod from sched-priority-5437 started at 2021-05-20 15:39:14 +0000 UTC (1 container statuses recorded) May 20 15:39:34.534: INFO: Container test-pod ready: false, restart count 0 May 20 15:39:34.534: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 20 15:39:34.543: INFO: coredns-558bd4d5db-r5ppk from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container coredns ready: true, restart count 0 May 20 15:39:34.543: INFO: coredns-558bd4d5db-xg8b5 from kube-system started at 2021-05-20 14:31:04 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container coredns ready: true, restart count 
0 May 20 15:39:34.543: INFO: create-loop-devs-jq69v from kube-system started at 2021-05-20 14:05:01 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container loopdev ready: true, restart count 0 May 20 15:39:34.543: INFO: kindnet-xkwvl from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container kindnet-cni ready: true, restart count 1 May 20 15:39:34.543: INFO: kube-multus-ds-4pmk4 from kube-system started at 2021-05-20 14:04:43 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container kube-multus ready: true, restart count 0 May 20 15:39:34.543: INFO: kube-proxy-gh4rd from kube-system started at 2021-05-16 10:44:23 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container kube-proxy ready: true, restart count 0 May 20 15:39:34.543: INFO: tune-sysctls-pgxh4 from kube-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container setsysctls ready: true, restart count 0 May 20 15:39:34.543: INFO: speaker-67fwk from metallb-system started at 2021-05-20 14:04:33 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container speaker ready: true, restart count 0 May 20 15:39:34.543: INFO: with-tolerations from sched-pred-3434 started at 2021-05-20 15:39:27 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container with-tolerations ready: true, restart count 0 May 20 15:39:34.543: INFO: with-labels from sched-pred-5174 started at 2021-05-20 15:39:31 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container with-labels ready: true, restart count 0 May 20 15:39:34.543: INFO: rs-e2e-pts-score-4d75s from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:34.543: INFO: rs-e2e-pts-score-dxf4x from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:34.543: INFO: rs-e2e-pts-score-fncnz from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container e2e-pts-score ready: false, restart count 0 May 20 15:39:34.543: INFO: rs-e2e-pts-score-xdg5f from sched-priority-5437 started at 2021-05-20 15:39:10 +0000 UTC (1 container statuses recorded) May 20 15:39:34.543: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-8891517b-f51a-4ae3-83e2-a37dcfd07803 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. 
STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e17a14d26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-50/without-toleration to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e34049dca], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.101/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e3fc14d75], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e40d72f15], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e498013aa], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e8fe390e9], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.1680d08e923452ef], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.1680d08e923452ef], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e17a14d26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-50/without-toleration to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e34049dca], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.101/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e3fc14d75], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e40d72f15], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e498013aa], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1680d08e8fe390e9], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.1680d08ef7467253], Reason = [Scheduled], Message = [Successfully assigned sched-pred-50/still-no-tolerations to v1.21-worker] STEP: removing the label kubernetes.io/e2e-label-key-8891517b-f51a-4ae3-83e2-a37dcfd07803 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-8891517b-f51a-4ae3-83e2-a37dcfd07803 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:39:38.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-50" for this suite. 
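------------------------------
While the random NoSchedule taint was in place, the still-no-tolerations pod could not be scheduled; once the taint was removed it was assigned to v1.21-worker. For illustration only, a toleration matching that taint (key and value taken from the events above) is what a pod would need to tolerate it before removal. A minimal sketch, not the e2e framework's own code; the package layout and JSON printing are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Toleration matching the taint applied by the test:
	// kubernetes.io/e2e-taint-key-86f9f313-...=testing-taint-value:NoSchedule
	tol := corev1.Toleration{
		Key:      "kubernetes.io/e2e-taint-key-86f9f313-19b9-4074-b674-cc22f9b385d9",
		Operator: corev1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// A pod carrying this toleration would not be filtered out by that taint;
	// the test pod deliberately omits it, hence the FailedScheduling event.
	out, _ := json.MarshalIndent([]corev1.Toleration{tol}, "", "  ")
	fmt.Println(string(out))
}
------------------------------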
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":11,"skipped":4191,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 15:39:38.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 20 15:39:38.714: INFO: Waiting up to 1m0s for all nodes to be ready May 20 15:40:38.765: INFO: Waiting for terminating namespaces to be deleted... May 20 15:40:38.769: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 15:40:38.783: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 15:40:38.783: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 20 15:40:38.798: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:38.798: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:40:38.798: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.798: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.799: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.799: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.799: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.799: INFO: Pod for on the node: envoy-k7tkp, Cpu: 200, Mem: 419430400 May 20 15:40:38.799: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:38.799: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 20 15:40:40.839: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:40.839: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:40:40.839: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:40.839: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:40.839: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 20 15:40:40.844: INFO: Waiting for running... May 20 15:40:45.901: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
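------------------------------
The cpuFraction and memFraction values logged around this step are simply requested/allocatable ratios: 100 requested milli-CPU against 88000 allocatable milli-CPU gives 0.0011363..., and 104857600 requested bytes against 67430219776 allocatable bytes gives 0.0015550.... A minimal, stdlib-only sketch of that arithmetic (the helper and variable names are illustrative, not the e2e framework's):

package main

import "fmt"

// fraction returns requested/allocatable as a float64, mirroring the
// ComputeCPUMemFraction numbers printed in the log.
func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

func main() {
	// Values taken from the log for node v1.21-worker.
	cpuFraction := fraction(100, 88000)             // ≈ 0.0011363636363636363
	memFraction := fraction(104857600, 67430219776) // ≈ 0.001555053510849171
	fmt.Printf("cpuFraction=%v memFraction=%v\n", cpuFraction, memFraction)
}
------------------------------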
May 20 15:40:50.969: INFO: ComputeCPUMemFraction for node: v1.21-worker May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:50.969: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 15:40:50.969: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 15:40:50.969: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 20 15:40:50.969: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 15:41:05.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5220" for this suite. 
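------------------------------
The spec above launches pod-with-label-security-s1 on one node, creates balancing pods on both nodes, and then expects a second pod whose podAntiAffinity rejects that label to be scheduled onto the other node. For illustration only, a minimal sketch of such an anti-affinity stanza, assuming a security=s1 label on the first pod (the label key/value are an assumption inferred from the pod name; the actual e2e fixture may differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity: do not schedule onto any node (topologyKey
	// kubernetes.io/hostname) that already runs a pod labelled security=s1.
	aff := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"security": "s1"}, // assumed label
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	out, _ := json.MarshalIndent(aff, "", "  ")
	fmt.Println(string(out))
}
------------------------------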
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:86.340 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":12,"skipped":4369,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 20 15:41:05.042: INFO: Running AfterSuite actions on all nodes May 20 15:41:05.042: INFO: Running AfterSuite actions on node 1 May 20 15:41:05.042: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":12,"skipped":5758,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]"]} Summarizing 1 Failure: [Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:165 Ran 13 of 5771 Specs in 1098.899 seconds FAIL! -- 12 Passed | 1 Failed | 0 Pending | 5758 Skipped --- FAIL: TestE2E (1098.96s) FAIL Ginkgo ran 1 suite in 18m20.439392478s Test Suite Failed
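------------------------------
The single failure summarized above is the LocalStorageCapacityIsolation predicate spec, which exercises scheduling of pods that declare local ephemeral-storage requests and limits. As a hedged illustration only (this is not the failing test's code and makes no claim about why it failed in this run), a container would declare such resources roughly as follows; the quantity is a placeholder:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Container resources with a local ephemeral-storage request and limit,
	// the resource that the LocalStorageCapacityIsolation spec exercises.
	res := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceEphemeralStorage: resource.MustParse("500Mi"), // placeholder quantity
		},
		Limits: corev1.ResourceList{
			corev1.ResourceEphemeralStorage: resource.MustParse("500Mi"),
		},
	}
	out, _ := json.MarshalIndent(res, "", "  ")
	fmt.Println(string(out))
}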