I0325 11:25:35.244703 8 e2e.go:129] Starting e2e run "b48d55be-bde3-46f3-ac69-1a01690fb6ea" on Ginkgo node 1 {"msg":"Test Suite starting","total":16,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1616671533 - Will randomize all specs Will run 16 of 5737 specs Mar 25 11:25:35.266: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:25:35.269: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 25 11:25:35.367: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 11:25:35.600: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 25 11:25:35.600: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 25 11:25:35.600: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 25 11:25:35.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 25 11:25:35.779: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 25 11:25:35.779: INFO: e2e test version: v1.21.0-beta.1 Mar 25 11:25:35.780: INFO: kube-apiserver version: v1.21.0-alpha.0 Mar 25 11:25:35.780: INFO: >>> kubeConfig: /root/.kube/config Mar 25 11:25:36.001: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:76 [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:25:36.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az Mar 25 11:25:38.443: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:47 Mar 25 11:25:38.452: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:25:38.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-5877" for this suite. 
[AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:67 S [SKIPPING] in Spec Setup (BeforeEach) [2.669 seconds] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should spread the pods of a replication controller across zones [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:76 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:48 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:25:38.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:25:39.603: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:25:39.805: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:25:39.925: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:25:39.929: INFO: pod-handle-http-request from container-lifecycle-hook-8027 started at 2021-03-25 11:25:09 +0000 UTC (1 container statuses recorded) Mar 25 11:25:39.929: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:25:39.929: INFO: busybox-50f92596-0685-4fbb-ba11-8acc8326f510 from container-probe-6917 started at 2021-03-25 11:22:32 +0000 UTC (1 container statuses recorded) Mar 25 11:25:39.929: INFO: Container busybox ready: true, restart count 0 Mar 25 11:25:39.929: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:25:39.929: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:25:39.929: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:25:39.929: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:25:39.929: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:25:39.930: INFO: Container back-off-cap ready: false, restart count 4 Mar 25 11:25:39.930: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:25:40.013: INFO: pod-with-poststart-exec-hook from container-lifecycle-hook-8027 started at 2021-03-25 11:25:15 +0000 UTC (1 container statuses recorded) Mar 25 11:25:40.013: INFO: Container pod-with-poststart-exec-hook ready: false, restart count 0 Mar 25 11:25:40.013: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:25:40.013: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:25:40.013: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:25:40.013: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:25:40.013: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:25:40.013: INFO: Container kube-proxy ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2.166f9234f8f8d8c1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2.166f9236774810a9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7697/filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2.166f92372d15b155], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2.166f92379f902d2a], Reason = [Created], Message = [Created container filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2] STEP: Considering event: Type = [Normal], Name = [filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2.166f9237b67710df], Reason = [Started], Message = [Started container filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2] STEP: Considering event: Type = [Normal], Name = [without-label.166f92334ca82223], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7697/without-label to latest-worker2] STEP: Considering event: Type = [Normal], Name = [without-label.166f9233b501b7f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.166f92340f1841bc], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.166f9234232716ea], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.166f9234c4ce7e75], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod9deb306f-fc08-41eb-8cf4-783932218b71.166f9237d93dd39b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:01.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7697" for this suite. 
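The pod-overhead predicate exercised above works by adding a RuntimeClass's Overhead.PodFixed to the pod's own container requests before fitting it onto a node; in this run the test measures that against a fake extended resource (example.com/beardsecond). The Go sketch below only illustrates the object shapes involved, with ordinary cpu/memory quantities and made-up names rather than the values the e2e framework actually creates:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    nodev1 "k8s.io/api/node/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // RuntimeClass whose Overhead.PodFixed is charged to every pod that uses it.
    rc := nodev1.RuntimeClass{
        ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"}, // hypothetical name
        Handler:    "runc",
        Overhead: &nodev1.Overhead{
            PodFixed: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("250m"),
                corev1.ResourceMemory: resource.MustParse("120Mi"),
            },
        },
    }

    // Pod that opts into the RuntimeClass; the scheduler accounts for the
    // container request (500m) plus the RuntimeClass overhead (250m) on the node.
    rcName := rc.Name
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "overhead-pod"},
        Spec: corev1.PodSpec{
            RuntimeClassName: &rcName,
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "k8s.gcr.io/pause:3.4.1",
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("500m"),
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(map[string]interface{}{"runtimeClass": rc, "pod": pod}, "", "  ")
    fmt.Println(string(out))
}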
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:22.791 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":16,"completed":1,"skipped":716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:26:01.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:26:02.454: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:26:02.838: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:26:02.957: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:26:03.030: INFO: pod-handle-http-request from container-lifecycle-hook-8027 started at 2021-03-25 11:25:09 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.030: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:26:03.030: INFO: busybox-50f92596-0685-4fbb-ba11-8acc8326f510 from container-probe-6917 started at 2021-03-25 11:22:32 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.030: INFO: Container busybox ready: true, restart count 0 Mar 25 11:26:03.030: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.030: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:03.030: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.030: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:03.030: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.030: INFO: Container back-off-cap ready: false, restart count 5 Mar 25 11:26:03.030: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:26:03.134: INFO: pod-with-poststart-exec-hook from container-lifecycle-hook-8027 started at 2021-03-25 11:25:15 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.134: INFO: Container pod-with-poststart-exec-hook ready: false, restart count 0 Mar 25 11:26:03.134: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.134: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:26:03.134: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.134: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:03.134: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.134: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:03.134: INFO: filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2 from sched-pred-7697 started at 2021-03-25 11:25:53 +0000 UTC (1 container statuses recorded) Mar 25 11:26:03.134: INFO: Container filler-pod-59fc954c-31c0-475d-8f8e-3803b9ee1fd2 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-986c05b1-d903-4c74-b3c7-0306bff307c3=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-37243c8e-6a56-461f-ada8-2a1006f34340 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. 
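The matching taints-tolerations case above has two moving parts: a NoSchedule taint with a randomly generated key applied to the found node, and a toleration with the same key and value added to the relaunched pod. A minimal sketch of those two objects, assuming an illustrative key in place of the generated one:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Taint placed on the node: pods without a matching toleration are kept off it.
    taint := corev1.Taint{
        Key:    "kubernetes.io/e2e-taint-key-example", // illustrative key
        Value:  "testing-taint-value",
        Effect: corev1.TaintEffectNoSchedule,
    }

    // Toleration carried by the relaunched pod so it can land on the tainted node.
    toleration := corev1.Toleration{
        Key:      taint.Key,
        Operator: corev1.TolerationOpEqual,
        Value:    taint.Value,
        Effect:   corev1.TaintEffectNoSchedule,
    }

    out, _ := json.MarshalIndent(map[string]interface{}{"taint": taint, "toleration": toleration}, "", "  ")
    fmt.Println(string(out))
}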
STEP: removing the label kubernetes.io/e2e-label-key-37243c8e-6a56-461f-ada8-2a1006f34340 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-37243c8e-6a56-461f-ada8-2a1006f34340 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-986c05b1-d903-4c74-b3c7-0306bff307c3=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:23.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7252" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.846 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":16,"completed":2,"skipped":899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:26:23.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:26:23.585: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:26:23.637: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:26:23.735: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:26:23.871: INFO: pod-handle-http-request from container-lifecycle-hook-4144 started at 2021-03-25 11:26:14 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.871: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:26:23.871: INFO: busybox-50f92596-0685-4fbb-ba11-8acc8326f510 from container-probe-6917 started at 2021-03-25 11:22:32 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.871: INFO: Container busybox ready: true, restart count 0 Mar 25 11:26:23.872: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.872: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:23.872: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.872: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:23.872: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.872: INFO: Container back-off-cap ready: false, restart count 5 Mar 25 11:26:23.872: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:26:23.918: INFO: pod-with-prestop-http-hook from container-lifecycle-hook-4144 started at 2021-03-25 11:26:22 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.918: INFO: Container pod-with-prestop-http-hook ready: false, restart count 0 Mar 25 11:26:23.918: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.918: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:26:23.918: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.918: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:23.918: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.918: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:23.918: INFO: with-tolerations from sched-pred-7252 started at 2021-03-25 11:26:14 +0000 UTC (1 container statuses recorded) Mar 25 11:26:23.918: INFO: Container with-tolerations ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7279801e-c5e0-4fe9-b2bd-e40022b1159c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7279801e-c5e0-4fe9-b2bd-e40022b1159c off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-7279801e-c5e0-4fe9-b2bd-e40022b1159c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:39.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4364" for this suite. 
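The required-NodeAffinity case labels the found node (here kubernetes.io/e2e-7279801e-c5e0-4fe9-b2bd-e40022b1159c=42, per the log) and relaunches the pod with a hard node-affinity term selecting that label. A hedged sketch of what such an affinity stanza looks like:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Hard requirement: only nodes carrying the freshly applied label qualify.
    affinity := corev1.Affinity{
        NodeAffinity: &corev1.NodeAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                    MatchExpressions: []corev1.NodeSelectorRequirement{{
                        Key:      "kubernetes.io/e2e-7279801e-c5e0-4fe9-b2bd-e40022b1159c",
                        Operator: corev1.NodeSelectorOpIn,
                        Values:   []string{"42"},
                    }},
                }},
            },
        },
    }

    out, _ := json.MarshalIndent(affinity, "", "  ")
    fmt.Println(string(out))
}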
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.423 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":16,"completed":3,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:26:39.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:26:40.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:26:40.894: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:26:41.278: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:26:41.359: INFO: pod-handle-http-request from container-lifecycle-hook-4144 started at 2021-03-25 11:26:14 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.359: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:26:41.359: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.359: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:41.359: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.359: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:41.359: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.359: INFO: Container back-off-cap ready: false, restart count 5 Mar 25 11:26:41.359: INFO: with-labels from sched-pred-4364 started at 2021-03-25 11:26:32 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.359: INFO: Container with-labels ready: true, restart count 0 Mar 25 11:26:41.359: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:26:41.515: INFO: pod-with-prestop-http-hook from container-lifecycle-hook-4144 started at 2021-03-25 11:26:22 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.515: INFO: Container pod-with-prestop-http-hook ready: false, restart count 0 Mar 25 11:26:41.515: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.515: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:26:41.515: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.515: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:26:41.515: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.515: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:26:41.515: INFO: with-tolerations from sched-pred-7252 started at 2021-03-25 11:26:14 +0000 UTC (1 container statuses recorded) Mar 25 11:26:41.515: INFO: Container with-tolerations ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.166f92419dead10c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:26:43.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4681" for this suite. 
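The non-matching case goes the other way: the pod's node selector names a label that no schedulable node carries, so the pod stays Pending and produces the FailedScheduling event quoted above. A minimal sketch, with an invented selector key standing in for whatever the test generates:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The nodeSelector references a label no node has, so the scheduler records
    // a FailedScheduling event for the pod instead of binding it.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"label-that-no-node-has": "value"}, // illustrative
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "k8s.gcr.io/pause:3.4.1",
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}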
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":16,"completed":4,"skipped":2062,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:26:43.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 25 11:26:44.409: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 11:27:44.507: INFO: Waiting for terminating namespaces to be deleted... Mar 25 11:27:44.538: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 11:27:45.729: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed) Mar 25 11:27:45.729: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Mar 25 11:27:45.729: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:27:45.929: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:27:45.929: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:27:45.929: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:27:45.929: INFO: Pod for on the node: with-labels, Cpu: 100, Mem: 209715200 Mar 25 11:27:45.929: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:27:45.929: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:27:45.929: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:27:46.775: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:27:46.775: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:27:46.775: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:27:46.775: INFO: Pod for on the node: pod-update-activedeadlineseconds-c0f69442-69e2-4d81-a457-476f0d78b864, Cpu: 100, Mem: 209715200 Mar 25 11:27:46.775: INFO: Pod for on the node: pod-subpath-test-secret-64nx, Cpu: 100, Mem: 209715200 Mar 25 11:27:46.775: INFO: Pod for on the node: var-expansion-42c4136a-da92-40fe-bfa8-8df88c13f6d2, Cpu: 100, Mem: 209715200 Mar 25 11:27:46.775: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:27:46.775: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 Mar 25 11:27:46.775: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:27:47.217: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:27:47.217: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:27:47.217: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:27:47.217: INFO: Pod for on the node: with-labels, Cpu: 100, Mem: 209715200 Mar 25 11:27:47.217: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:27:47.217: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:27:47.217: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:27:48.311: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:27:48.311: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:27:48.311: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:27:48.311: INFO: Pod for on the node: pod-update-activedeadlineseconds-c0f69442-69e2-4d81-a457-476f0d78b864, Cpu: 100, Mem: 209715200 Mar 25 11:27:48.311: INFO: Pod for on the node: pod-subpath-test-secret-64nx, Cpu: 100, Mem: 209715200 Mar 25 11:27:48.311: INFO: Pod for on the node: var-expansion-42c4136a-da92-40fe-bfa8-8df88c13f6d2, Cpu: 100, Mem: 209715200 Mar 25 11:27:48.311: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:27:48.311: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, 
memFraction: 0.0011657570877347874 Mar 25 11:27:50.413: INFO: Waiting for running... Mar 25 11:28:00.592: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:28:10.643: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:28:10.718: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:28:10.718: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:28:10.718: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:28:10.718: INFO: Pod for on the node: 5d43e9d4-e2f4-44ad-a798-ba7f1b6ae400-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:28:10.718: INFO: Node: latest-worker, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:28:10.718: INFO: Node: latest-worker, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:28:10.718: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:28:10.947: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:28:10.947: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:28:10.947: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:28:10.947: INFO: Pod for on the node: pod-projected-secrets-053e2fe1-301e-44d8-a556-1c8ba22a72fc, Cpu: 300, Mem: 629145600 Mar 25 11:28:10.947: INFO: Pod for on the node: 605a7a0a-7d9e-4013-8da0-0592fab53fdc-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:28:10.947: INFO: Pod for on the node: var-expansion-42c4136a-da92-40fe-bfa8-8df88c13f6d2, Cpu: 100, Mem: 209715200 Mar 25 11:28:10.947: INFO: Node: latest-worker2, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:28:10.947: INFO: Node: latest-worker2, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-9939 to 1 STEP: Verify the pods should not scheduled to the node: latest-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-9939, will wait for the garbage collector to delete the pods Mar 25 11:28:30.997: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 112.475614ms Mar 25 11:28:32.497: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 1.500687607s [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:30:08.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9939" for this suite. 
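The avoidPod priority test drives the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation: the first node is annotated to avoid pods owned by the scheduler-priority-avoid-pod ReplicationController, so when the RC is scaled to 1 the replica should land on the other node, as the verification step above checks. A rough sketch of how such an annotation value can be assembled (the UID, reason, and message are placeholders, not values from this run):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

func main() {
    controller := true
    avoid := corev1.AvoidPods{
        PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
            PodSignature: corev1.PodSignature{
                PodController: &metav1.OwnerReference{
                    APIVersion: "v1",
                    Kind:       "ReplicationController",
                    Name:       "scheduler-priority-avoid-pod",
                    UID:        types.UID("00000000-0000-0000-0000-000000000000"), // placeholder
                    Controller: &controller,
                },
            },
            Reason:  "some reason",  // placeholder
            Message: "some message", // placeholder
        }},
    }

    val, _ := json.Marshal(avoid)
    // The serialized value goes onto the node as the well-known annotation; scoring
    // then deprioritizes that node for pods matching the controller signature.
    annotations := map[string]string{
        "scheduler.alpha.kubernetes.io/preferAvoidPods": string(val),
    }

    out, _ := json.MarshalIndent(annotations, "", "  ")
    fmt.Println(string(out))
}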
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:205.620 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:262 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":16,"completed":5,"skipped":2467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:30:09.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:30:09.779: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:30:09.888: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:30:09.926: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:30:09.994: INFO: pod-configmaps-6adcd610-f661-4f1d-abb5-2f621d2bf0f9 from configmap-6498 started at 2021-03-25 11:30:09 +0000 UTC (1 container statuses recorded) Mar 25 11:30:09.994: INFO: Container agnhost-container ready: false, restart count 0 Mar 25 11:30:09.994: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:30:09.994: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:30:09.994: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:30:09.994: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:30:09.994: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:30:09.994: INFO: Container back-off-cap ready: false, restart count 6 Mar 25 11:30:09.994: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:30:10.181: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:30:10.181: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:30:10.181: INFO: test-rolling-update-deployment-65dc7745-5wfjd from deployment-6952 started at 2021-03-25 11:29:56 +0000 UTC (1 container statuses recorded) Mar 25 11:30:10.181: INFO: Container agnhost ready: true, restart count 0 Mar 25 11:30:10.181: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:30:10.181: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:30:10.181: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:30:10.181: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:30:10.181: INFO: pod-secrets-6cd28295-2474-4fb9-91e0-994738879737 from secrets-89 started at 2021-03-25 11:30:09 +0000 UTC (1 container statuses recorded) Mar 25 11:30:10.181: INFO: Container secret-volume-test ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 
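The PodTopologySpread filtering test relabels the two workers with the dedicated topology key kubernetes.io/e2e-pts-filter and then expects 4 replicas constrained by MaxSkew=1 to split 2-and-2 across those domains. A sketch of the constraint such replicas would carry, with an assumed label selector:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // With MaxSkew=1 and DoNotSchedule, the difference in matching pods between
    // the two kubernetes.io/e2e-pts-filter domains may never exceed 1, so four
    // replicas end up distributed 2/2.
    constraint := corev1.TopologySpreadConstraint{
        MaxSkew:           1,
        TopologyKey:       "kubernetes.io/e2e-pts-filter",
        WhenUnsatisfiable: corev1.DoNotSchedule,
        LabelSelector: &metav1.LabelSelector{
            MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // illustrative selector
        },
    }

    out, _ := json.MarshalIndent(constraint, "", "  ")
    fmt.Println(string(out))
}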
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:30:40.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5692" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:31.240 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":16,"completed":6,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:403 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:30:40.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 25 11:30:41.140: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 11:31:41.469: INFO: Waiting for terminating namespaces to be deleted... Mar 25 11:31:41.811: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 11:31:42.124: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 25 11:31:42.124: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 25 11:31:42.124: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:31:42.287: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:31:42.287: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.288: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.288: INFO: Pod for on the node: pod-projected-secrets-702ef20c-d340-4b46-9fd1-ffbbd16d8974, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.288: INFO: Pod for on the node: rs-e2e-pts-filter-k8gfd, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.288: INFO: Pod for on the node: rs-e2e-pts-filter-mv97g, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.288: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:31:42.288: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:31:42.288: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:31:42.428: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.428: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:31:42.428: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.428: INFO: Pod for on the node: busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.428: INFO: Pod for on the node: pod-adoption, Cpu: 100, Mem: 209715200 Mar 25 11:31:42.428: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:31:42.428: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:389 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:403 Mar 25 11:32:02.302: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:32:02.442: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.442: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:32:02.442: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.442: INFO: Pod for on the node: busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.442: INFO: Pod for on the node: pod-adoption, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.443: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:32:02.443: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:32:02.443: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:32:02.487: INFO: Pod for on the node: pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.487: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:32:02.487: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.487: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:32:02.487: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:32:02.487: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:32:02.621: INFO: Waiting for running... Mar 25 11:32:07.798: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:32:12.849: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:32:12.916: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:32:12.916: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:32:12.916: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:32:12.916: INFO: Pod for on the node: busybox-scheduling-61d4423e-a8e1-4d01-aab6-96861525011f, Cpu: 100, Mem: 209715200 Mar 25 11:32:12.916: INFO: Pod for on the node: pod-adoption, Cpu: 100, Mem: 209715200 Mar 25 11:32:12.916: INFO: Pod for on the node: 71a59b10-9e5d-46f8-8bb2-9c3fcef8a41b-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:32:12.916: INFO: Node: latest-worker2, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:32:12.916: INFO: Node: latest-worker2, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Mar 25 11:32:12.916: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:32:13.133: INFO: Pod for on the node: pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f, Cpu: 100, Mem: 209715200 Mar 25 11:32:13.133: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:32:13.134: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:32:13.134: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:32:13.134: INFO: Pod for on the node: 9d1a4a44-6a64-4ce5-9f34-ea3083e33042-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:32:13.134: INFO: Pod for on the node: sample-webhook-deployment-8977db-zz4vc, Cpu: 100, Mem: 209715200 Mar 25 11:32:13.134: INFO: Node: latest-worker, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:32:13.134: INFO: Node: latest-worker, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Run a ReplicaSet with 4 replicas on node "latest-worker2" STEP: Verifying if the test-pod lands on node "latest-worker" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:397 STEP: removing the label kubernetes.io/e2e-pts-score off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:33:17.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4192" for this suite. 
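The ComputeCPUMemFraction lines above are plain ratios of summed pod requests to node allocatable, and the "balanced" filler pods are sized so both nodes sit near 0.5 before the scoring assertion runs. A small sketch reproducing the arithmetic with the numbers from this log:

package main

import "fmt"

func main() {
    // Values taken from the log for latest-worker after the balancing pod is created.
    const (
        requestedCPUMilli   = 8000.0         // summed container CPU requests (millicores)
        allocatableCPUMilli = 16000.0        // node allocatable CPU (millicores)
        requestedMemBytes   = 67473635328.0  // summed container memory requests (bytes)
        allocatableMemBytes = 134922104832.0 // node allocatable memory (bytes)
    )

    fmt.Printf("cpuFraction: %v\n", requestedCPUMilli/allocatableCPUMilli) // 0.5
    fmt.Printf("memFraction: %v\n", requestedMemBytes/allocatableMemBytes) // ~0.5000932605670187
}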
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:156.830 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:385 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:403 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":16,"completed":7,"skipped":3972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:72 [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:33:17.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename multi-az STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:47 Mar 25 11:33:17.720: INFO: Only supported for providers [gce gke aws] (not local) [AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:33:17.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "multi-az-3843" for this suite. 
[AfterEach] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:67 S [SKIPPING] in Spec Setup (BeforeEach) [0.663 seconds] [sig-scheduling] Multi-AZ Clusters /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should spread the pods of a service across zones [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:72 Only supported for providers [gce gke aws] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/ubernetes_lite.go:48 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:33:18.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:33:18.235: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:33:18.315: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:33:18.355: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:33:18.444: INFO: pod-configmaps-151d842d-3e92-4f49-a9f1-886bf488de3f from configmap-2541 started at 2021-03-25 11:31:44 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:33:18.444: INFO: liveness-37bfc647-b94b-4a1f-b226-219b08e16c1a from container-probe-7765 started at 2021-03-25 11:32:45 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:33:18.444: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:33:18.444: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:33:18.444: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container back-off-cap ready: false, restart count 6 Mar 25 11:33:18.444: INFO: test-pod from sched-priority-4192 started at 2021-03-25 11:32:25 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container test-pod ready: true, restart count 0 Mar 25 11:33:18.444: INFO: affinity-nodeport-transition-4wgwk from services-5242 started at 2021-03-25 11:33:17 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container affinity-nodeport-transition ready: false, restart count 0 Mar 25 11:33:18.444: INFO: affinity-nodeport-transition-qxbtn from services-5242 started at 2021-03-25 11:33:17 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.444: INFO: Container affinity-nodeport-transition ready: false, restart count 0 Mar 25 11:33:18.444: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:33:18.544: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:33:18.544: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:33:18.544: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:33:18.544: INFO: rs-e2e-pts-score-7t8jw from sched-priority-4192 started at 2021-03-25 11:32:13 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container e2e-pts-score ready: true, restart count 0 Mar 25 11:33:18.544: INFO: rs-e2e-pts-score-nvmjn from sched-priority-4192 started at 2021-03-25 11:32:13 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container e2e-pts-score ready: true, restart count 0 Mar 25 11:33:18.544: INFO: rs-e2e-pts-score-rcrv4 from sched-priority-4192 started at 2021-03-25 11:32:13 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container e2e-pts-score ready: true, restart count 0 Mar 25 11:33:18.544: INFO: rs-e2e-pts-score-wf5ff from sched-priority-4192 started at 2021-03-25 11:32:13 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: 
INFO: Container e2e-pts-score ready: true, restart count 0 Mar 25 11:33:18.544: INFO: affinity-nodeport-transition-cvnj6 from services-5242 started at 2021-03-25 11:33:17 +0000 UTC (1 container statuses recorded) Mar 25 11:33:18.544: INFO: Container affinity-nodeport-transition ready: false, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9f56222d-7524-454d-8904-e5386cda8b3f=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-339e3816-b254-4350-bd32-44519aa22266 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929e0bc53993], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4220/without-toleration to latest-worker2] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929f02887a38], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929fb1f5b010], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929fc64156c1], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f92a049b6b7dd], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.166f92a1243ddbf3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-9f56222d-7524-454d-8904-e5386cda8b3f: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.166f92a1243ddbf3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-9f56222d-7524-454d-8904-e5386cda8b3f: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929e0bc53993], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4220/without-toleration to latest-worker2] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929f02887a38], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929fb1f5b010], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f929fc64156c1], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.166f92a049b6b7dd], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9f56222d-7524-454d-8904-e5386cda8b3f=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.166f92a1cb4ff190], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4220/still-no-tolerations to latest-worker2] STEP: removing the label kubernetes.io/e2e-label-key-339e3816-b254-4350-bd32-44519aa22266 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-339e3816-b254-4350-bd32-44519aa22266 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9f56222d-7524-454d-8904-e5386cda8b3f=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:33:35.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4220" for this suite. 
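------------------------------
As a rough illustration (not the suite's own code), the NoSchedule taint exercised by this spec and a toleration that would have let the still-no-tolerations pod schedule can be written with the core/v1 Go types; the key and value below are placeholders for the randomly generated ones in the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Stand-in for the random kubernetes.io/e2e-taint-key-... taint applied above.
	taint := v1.Taint{
		Key:    "example.com/e2e-taint-key",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// A pod without any toleration is rejected by the tainted node (the
	// FailedScheduling event above); this toleration would have matched.
	toleration := v1.Toleration{
		Key:      "example.com/e2e-taint-key",
		Operator: v1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   v1.TaintEffectNoSchedule,
	}

	fmt.Println("toleration matches taint:", toleration.ToleratesTaint(&taint))
}
------------------------------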
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.655 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":16,"completed":8,"skipped":4157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:325 [BeforeEach] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:321 Mar 25 11:33:35.728: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds] [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 run Nvidia GPU Device Plugin tests with a recreation [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:325 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/nvidia-gpus.go:322 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:33:35.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 25 11:33:36.068: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 11:34:36.124: INFO: Waiting for terminating namespaces to be deleted... 
Mar 25 11:34:36.327: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 11:34:36.496: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 25 11:34:36.497: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 25 11:34:36.497: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:34:36.523: INFO: Pod for on the node: liveness-37bfc647-b94b-4a1f-b226-219b08e16c1a, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:34:36.523: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Pod for on the node: affinity-nodeport-transition-4wgwk, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Pod for on the node: affinity-nodeport-transition-qxbtn, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Pod for on the node: execpod-affinity72ht5, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.523: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:34:36.523: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:34:36.523: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:34:36.563: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.563: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:34:36.563: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.563: INFO: Pod for on the node: affinity-nodeport-transition-cvnj6, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.564: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:34:36.564: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 Mar 25 11:34:36.564: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:34:36.641: INFO: Pod for on the node: liveness-37bfc647-b94b-4a1f-b226-219b08e16c1a, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:34:36.641: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Pod for on the node: affinity-nodeport-transition-4wgwk, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Pod for on the node: affinity-nodeport-transition-qxbtn, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Pod for on the node: execpod-affinity72ht5, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.641: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:34:36.641: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:34:36.641: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:34:36.656: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 
11:34:36.656: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:34:36.656: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.656: INFO: Pod for on the node: affinity-nodeport-transition-cvnj6, Cpu: 100, Mem: 209715200 Mar 25 11:34:36.656: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:34:36.656: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:34:36.833: INFO: Waiting for running... Mar 25 11:34:41.950: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:34:47.001: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:34:47.024: INFO: Pod for on the node: liveness-37bfc647-b94b-4a1f-b226-219b08e16c1a, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:34:47.024: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Pod for on the node: 40802e64-f3c8-4b1d-842a-cd72d88fa5bd-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:34:47.024: INFO: Pod for on the node: affinity-nodeport-transition-4wgwk, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Pod for on the node: affinity-nodeport-transition-qxbtn, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Pod for on the node: execpod-affinity72ht5, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.024: INFO: Node: latest-worker, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:34:47.024: INFO: Node: latest-worker, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:34:47.024: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:34:47.040: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.040: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:34:47.040: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.040: INFO: Pod for on the node: 9190aae2-13ff-453f-884f-d8cb9d562308-0, Cpu: 7800, Mem: 67316348928 Mar 25 11:34:47.040: INFO: Pod for on the node: affinity-nodeport-transition-cvnj6, Cpu: 100, Mem: 209715200 Mar 25 11:34:47.040: INFO: Node: latest-worker2, totalRequestedCPUResource: 8000, cpuAllocatableMil: 16000, cpuFraction: 0.5 Mar 25 11:34:47.040: INFO: Node: latest-worker2, totalRequestedMemResource: 67473635328, memAllocatableVal: 134922104832, memFraction: 0.5000932605670187 STEP: Trying to apply 10 (tolerable) taints on the first node. 
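------------------------------
The cpuFraction and memFraction figures above are just requested resources divided by node allocatable, and the 7800-millicpu filler pods push both nodes to roughly the 0.5 target before the tolerable taints are applied. A minimal sketch of that arithmetic, reusing the values from the log (the helper name is ours, not the framework's):

package main

import "fmt"

// fraction mirrors the ComputeCPUMemFraction output in the log:
// requested resource divided by node-allocatable.
func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

func main() {
	const (
		cpuAllocatableMilli = 16000        // cpuAllocatableMil from the log
		memAllocatableBytes = 134922104832 // memAllocatableVal from the log
	)

	// Before the balancing pods are created.
	fmt.Println(fraction(200, cpuAllocatableMilli))       // 0.0125
	fmt.Println(fraction(157286400, memAllocatableBytes)) // ~0.0011657570877347874

	// After the 7800m filler pod is added to each node.
	fmt.Println(fraction(8000, cpuAllocatableMilli))        // 0.5
	fmt.Println(fraction(67473635328, memAllocatableBytes)) // ~0.5000932605670187
}
------------------------------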
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b8e5553a-e20d-45d2-9aaf=testing-taint-value-933e3d1b-419f-41ee-930d-265d33effcd9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3d95f340-d74f-452e-b1ce=testing-taint-value-1cecb833-a034-4e16-9340-c85071beb050:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-02624c32-2f78-48e7-9db4=testing-taint-value-60242846-7b13-44c5-9ebe-9cff1305de5b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-042ea486-701e-4b6e-9aa0=testing-taint-value-af8dbbfe-e2ec-46aa-a75e-c35217a7643b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c006ba8b-2ea1-43e3-b959=testing-taint-value-a964c7f2-5c73-46a5-8a8a-9d09338d581a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-af2caa97-e9f1-4f6e-8f45=testing-taint-value-21e2d0d1-f874-40f6-b7e9-b397f7eb59ae:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5645b2a2-a761-4e47-8c5b=testing-taint-value-0f8d9079-5f4b-4e94-b3c8-86674f1780de:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-88ae9e7e-95b5-4f39-8f4b=testing-taint-value-654e0b32-d3da-4974-9b79-552710938941:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c95246db-e3eb-408c-9451=testing-taint-value-15bf61f1-ae9d-4c5e-a431-e2ed6e7014b6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8adf8419-f1df-437b-97de=testing-taint-value-e7bd40f7-93c6-4bba-b5d1-4e7abd6e2e1a:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6b9ed442-5528-4b69-92b1=testing-taint-value-057f28fe-6807-4062-b5dd-038fa19d1256:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b641ac86-5f15-49fa-82f7=testing-taint-value-f52a008c-96e1-40a5-a3b3-fe5c0d621627:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9b6ab8bf-d90a-4f56-9ac0=testing-taint-value-db911901-abee-4f0f-b1f8-e3f5d46cd579:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9244f218-002b-40be-883b=testing-taint-value-6570ea9c-e84e-4c4b-8d3b-5e2932a45b43:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-127975f2-777e-4819-8d39=testing-taint-value-00c4b378-6368-4e59-93f2-f599305ff11d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6e69d26d-0af8-41b8-bb73=testing-taint-value-4e1e895c-bde6-4f58-9fb2-74c7e68249e7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-06d27a6c-0c1a-4c1a-abe3=testing-taint-value-016b61b5-551f-48d2-8fbf-5c81e13851c1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-92eac00b-b3a7-4a65-9597=testing-taint-value-8080c865-2d3e-45fe-8f0d-979504f3f674:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-38575e9b-5b51-44c3-8f46=testing-taint-value-d65ae660-5498-4d66-bc82-14c92481da01:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-6d46cbd4-f356-4b5f-80fd=testing-taint-value-c6405ad7-64b2-4e93-a862-7e687aa543a7:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6b9ed442-5528-4b69-92b1=testing-taint-value-057f28fe-6807-4062-b5dd-038fa19d1256:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b641ac86-5f15-49fa-82f7=testing-taint-value-f52a008c-96e1-40a5-a3b3-fe5c0d621627:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9b6ab8bf-d90a-4f56-9ac0=testing-taint-value-db911901-abee-4f0f-b1f8-e3f5d46cd579:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9244f218-002b-40be-883b=testing-taint-value-6570ea9c-e84e-4c4b-8d3b-5e2932a45b43:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-127975f2-777e-4819-8d39=testing-taint-value-00c4b378-6368-4e59-93f2-f599305ff11d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6e69d26d-0af8-41b8-bb73=testing-taint-value-4e1e895c-bde6-4f58-9fb2-74c7e68249e7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-06d27a6c-0c1a-4c1a-abe3=testing-taint-value-016b61b5-551f-48d2-8fbf-5c81e13851c1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-92eac00b-b3a7-4a65-9597=testing-taint-value-8080c865-2d3e-45fe-8f0d-979504f3f674:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-38575e9b-5b51-44c3-8f46=testing-taint-value-d65ae660-5498-4d66-bc82-14c92481da01:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6d46cbd4-f356-4b5f-80fd=testing-taint-value-c6405ad7-64b2-4e93-a862-7e687aa543a7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b8e5553a-e20d-45d2-9aaf=testing-taint-value-933e3d1b-419f-41ee-930d-265d33effcd9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3d95f340-d74f-452e-b1ce=testing-taint-value-1cecb833-a034-4e16-9340-c85071beb050:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-02624c32-2f78-48e7-9db4=testing-taint-value-60242846-7b13-44c5-9ebe-9cff1305de5b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-042ea486-701e-4b6e-9aa0=testing-taint-value-af8dbbfe-e2ec-46aa-a75e-c35217a7643b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c006ba8b-2ea1-43e3-b959=testing-taint-value-a964c7f2-5c73-46a5-8a8a-9d09338d581a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-af2caa97-e9f1-4f6e-8f45=testing-taint-value-21e2d0d1-f874-40f6-b7e9-b397f7eb59ae:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5645b2a2-a761-4e47-8c5b=testing-taint-value-0f8d9079-5f4b-4e94-b3c8-86674f1780de:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-88ae9e7e-95b5-4f39-8f4b=testing-taint-value-654e0b32-d3da-4974-9b79-552710938941:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c95246db-e3eb-408c-9451=testing-taint-value-15bf61f1-ae9d-4c5e-a431-e2ed6e7014b6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8adf8419-f1df-437b-97de=testing-taint-value-e7bd40f7-93c6-4bba-b5d1-4e7abd6e2e1a:PreferNoSchedule Mar 25 11:35:57.854: INFO: Failed to wait until all memory balanced pods are deleted: timed out waiting for the condition. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:35:57.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1548" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:142.203 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:326 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":16,"completed":9,"skipped":4528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:35:57.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 Mar 25 11:35:58.339: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 11:36:58.520: INFO: Waiting for terminating namespaces to be deleted... Mar 25 11:36:58.623: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 25 11:36:58.848: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 25 11:36:58.848: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
Mar 25 11:36:58.848: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:36:58.880: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:36:58.880: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Pod for on the node: logs-generator, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Pod for on the node: with-tolerations, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Pod for on the node: ss-0, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Pod for on the node: test-pod, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.880: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:36:58.880: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:36:58.880: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:36:58.899: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.899: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:36:58.899: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.899: INFO: Pod for on the node: e2e-dns-scale-records-d8290d30-c05e-4e71-8cae-64a5f6a095bf, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.899: INFO: Pod for on the node: server-envvars-abbf8db4-16f0-4b43-8cb1-28543f2b7bfd, Cpu: 100, Mem: 209715200 Mar 25 11:36:58.899: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:36:58.899: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 STEP: Trying to launch a pod with a label to get a node which can launch it. 
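------------------------------
This spec first parks a labeled pod (pod-with-label-security-s1) on one worker and then launches a pod whose anti-affinity points at that label, so the scheduler should land it on the other node. A sketch of such a term with the core/v1 types follows; the security=S1 label is inferred from the pod name, and the suite's actual term may be expressed as a preferred (weighted) rather than a required term.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity term that steers the new pod away from whichever node
	// already runs a pod labeled security=S1, spread by hostname.
	affinity := &v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"security": "S1"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0])
}
------------------------------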
STEP: Verifying the node has a label kubernetes.io/hostname Mar 25 11:37:05.210: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:37:05.219: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:37:05.219: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.219: INFO: Pod for on the node: logs-generator, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.219: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.219: INFO: Pod for on the node: with-tolerations, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.219: INFO: Pod for on the node: ss-0, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.219: INFO: Node: latest-worker, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:37:05.219: INFO: Node: latest-worker, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:37:05.219: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:37:05.225: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.225: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:37:05.225: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.225: INFO: Pod for on the node: e2e-dns-scale-records-d8290d30-c05e-4e71-8cae-64a5f6a095bf, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.225: INFO: Pod for on the node: server-envvars-abbf8db4-16f0-4b43-8cb1-28543f2b7bfd, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.225: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Mar 25 11:37:05.225: INFO: Node: latest-worker2, totalRequestedCPUResource: 200, cpuAllocatableMil: 16000, cpuFraction: 0.0125 Mar 25 11:37:05.225: INFO: Node: latest-worker2, totalRequestedMemResource: 157286400, memAllocatableVal: 134922104832, memFraction: 0.0011657570877347874 Mar 25 11:37:05.251: INFO: Waiting for running... Mar 25 11:37:15.362: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Mar 25 11:37:25.414: INFO: ComputeCPUMemFraction for node: latest-worker Mar 25 11:37:25.601: INFO: Pod for on the node: kindnet-bpcmh, Cpu: 100, Mem: 52428800 Mar 25 11:37:25.601: INFO: Pod for on the node: kube-proxy-kjrrj, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.601: INFO: Pod for on the node: back-off-cap, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.601: INFO: Pod for on the node: e7b41e53-70c8-4a9b-bb6b-98af458a56ef-0, Cpu: 9400, Mem: 80808559411 Mar 25 11:37:25.601: INFO: Pod for on the node: ss-0, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.601: INFO: Node: latest-worker, totalRequestedCPUResource: 9600, cpuAllocatableMil: 16000, cpuFraction: 0.6 Mar 25 11:37:25.601: INFO: Node: latest-worker, totalRequestedMemResource: 80965845811, memAllocatableVal: 134922104832, memFraction: 0.6000932605655365 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Mar 25 11:37:25.601: INFO: ComputeCPUMemFraction for node: latest-worker2 Mar 25 11:37:25.625: INFO: Pod for on the node: pvc-volume-tester-gqglb, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.626: INFO: Pod for on the node: kindnet-7xphn, Cpu: 100, Mem: 52428800 Mar 25 11:37:25.626: INFO: Pod for on the node: kube-proxy-dv4wd, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.626: INFO: Pod for on the node: agnhost-primary-vsjzj, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.626: INFO: Pod for on the node: e2e-dns-scale-records-d8290d30-c05e-4e71-8cae-64a5f6a095bf, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.626: INFO: Pod for on the node: 04764f37-0ab7-4d58-ad7f-a7a2b15fc2e6-0, Cpu: 9400, Mem: 80808559411 Mar 25 11:37:25.626: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Mar 25 11:37:25.626: INFO: Node: latest-worker2, totalRequestedCPUResource: 9600, cpuAllocatableMil: 16000, cpuFraction: 0.6 Mar 25 11:37:25.626: INFO: Node: latest-worker2, totalRequestedMemResource: 80965845811, memAllocatableVal: 134922104832, memFraction: 0.6000932605655365 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:38:26.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2092" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:150 • [SLOW TEST:151.810 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:178 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":16,"completed":10,"skipped":4569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:38:29.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:38:30.758: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:38:30.849: INFO: Waiting for 
terminating namespaces to be deleted... Mar 25 11:38:30.910: INFO: Logging pods the apiserver thinks is on node latest-worker before test Mar 25 11:38:30.961: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:38:30.961: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:38:30.961: INFO: hostexec-latest-worker-8c4pd from persistent-local-volumes-test-9173 started at 2021-03-25 11:38:25 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container agnhost-container ready: false, restart count 0 Mar 25 11:38:30.961: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container back-off-cap ready: false, restart count 7 Mar 25 11:38:30.961: INFO: pod-with-pod-antiaffinity from sched-priority-2092 started at 2021-03-25 11:37:25 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Mar 25 11:38:30.961: INFO: affinity-nodeport-4mf74 from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container affinity-nodeport ready: true, restart count 0 Mar 25 11:38:30.961: INFO: execpod-affinityrhj5l from services-507 started at 2021-03-25 11:38:04 +0000 UTC (1 container statuses recorded) Mar 25 11:38:30.961: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:38:30.961: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Mar 25 11:38:31.000: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:38:31.000: INFO: test-recreate-deployment-546b5fd69c-2954m from deployment-117 started at 2021-03-25 11:38:30 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container agnhost ready: false, restart count 0 Mar 25 11:38:31.000: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:38:31.000: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:38:31.000: INFO: pod-with-label-security-s1 from sched-priority-2092 started at 2021-03-25 11:36:59 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Mar 25 11:38:31.000: INFO: affinity-nodeport-w5pbb from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container affinity-nodeport ready: true, restart count 0 Mar 25 11:38:31.000: INFO: affinity-nodeport-wbfzs from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded) Mar 25 11:38:31.000: INFO: Container affinity-nodeport ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Mar 25 11:38:31.125: INFO: Pod test-recreate-deployment-546b5fd69c-2954m requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod kindnet-7xphn requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod kindnet-bpcmh requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod kube-proxy-dv4wd requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod kube-proxy-kjrrj requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod hostexec-latest-worker-8c4pd requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod back-off-cap requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod pod-with-label-security-s1 requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod pod-with-pod-antiaffinity requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod affinity-nodeport-4mf74 requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Pod affinity-nodeport-w5pbb requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod affinity-nodeport-wbfzs requesting local ephemeral resource =0 on Node latest-worker2 Mar 25 11:38:31.125: INFO: Pod execpod-affinityrhj5l requesting local ephemeral resource =0 on Node latest-worker Mar 25 11:38:31.125: INFO: Using pod capacity: 235846652313 Mar 25 11:38:31.125: INFO: Node: latest-worker2 has local ephemeral resource allocatable: 2358466523136 Mar 25 11:38:31.125: INFO: Node: latest-worker has local ephemeral resource allocatable: 2358466523136 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Mar 25 11:38:32.271: INFO: Waiting for running... 
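------------------------------
The "Using pod capacity" figure is the cluster's allocatable local ephemeral storage split evenly across the saturation pods: two workers at 2358466523136 bytes each, divided over the 20 overcommit pods. A small sketch of that division with the numbers from the log (constant names are ours):

package main

import "fmt"

func main() {
	const (
		allocatablePerNode = int64(2358466523136) // local ephemeral storage allocatable, from the log
		nodes              = 2
		podsToCreate       = 20 // "Starting additional 20 Pods to fully saturate the cluster"
	)

	// Per-pod request so the 20 pods consume all ephemeral storage across both
	// workers; the extra pod must then fail with "Insufficient ephemeral-storage".
	perPod := allocatablePerNode * nodes / podsToCreate
	fmt.Println(perPod) // 235846652313, matching "Using pod capacity" above
}
------------------------------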
STEP: Considering event: Type = [Normal], Name = [overcommit-0.166f92e6d164e782], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-0 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166f92e78e5e4fab], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166f92e994fed2d1], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.166f92e9cfb7261c], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166f92e6d31ee500], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-1 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166f92e7bd04a58d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166f92ea50ea5627], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.166f92ea69b5d545], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166f92e6f1958b74], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-10 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166f92e9f09c803c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166f92ec971999f0], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.166f92ecd08f3871], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166f92e6f356b354], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-11 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166f92eb19152463], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166f92edf10c50ec], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.166f92ee161ba28d], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166f92e6f72a1850], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-12 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166f92eaf47e52d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166f92edf10ec9d3], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.166f92ee161a05c3], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166f92e6f96e60f2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-13 to latest-worker2] STEP: Considering event: Type = [Normal], Name = 
[overcommit-13.166f92ea680a6f55], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166f92ec9e023a90], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.166f92ecd08e7296], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166f92e6fcce58ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-14 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166f92ea72d4bd3a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166f92ecf2d96cad], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.166f92ed4a4b9fbe], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166f92e7012aa72c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-15 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166f92ea901f8ee3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166f92ed6a7259d6], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.166f92edb2570d33], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166f92e702cc12ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-16 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166f92ea9513579d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166f92ecf05e6632], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.166f92ed4a48fbbf], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166f92e7050e132d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-17 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166f92eb50554eaa], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166f92edf10c512e], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.166f92ee161ab0cf], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166f92e70b45c143], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-18 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166f92eb69adb4db], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166f92ee040d24a6], Reason = [Created], 
Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.166f92ee3532b15a], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166f92e70bf0ad80], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-19 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166f92eb4fbc9f03], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166f92edf10c7308], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.166f92ee161453e7], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166f92e6d874a8c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-2 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166f92e7950b4d18], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166f92e8ef9967f2], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.166f92e929897f28], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166f92e6da37cbd5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-3 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166f92e828a6963c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166f92ea6a07e039], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.166f92ead7992719], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166f92e6dcba0d35], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-4 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166f92e7f8d1128b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166f92ea3115ba6f], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.166f92ea5fac47f1], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166f92e6e5aff580], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-5 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166f92e8f12f3cfe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166f92ecee3ccbad], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.166f92ed383ef171], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = 
[overcommit-6.166f92e6e665a602], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-6 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166f92e8d29790d3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166f92ec40eb26f9], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.166f92ec8a442940], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166f92e6e9ec54fd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-7 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166f92ea0c09d85f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166f92ed59c98572], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.166f92edb0f3a3b6], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166f92e6edda263a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-8 to latest-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166f92eaee6694b2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166f92ed933df0b3], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.166f92edfb972cdc], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166f92e6ef599096], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7872/overcommit-9 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166f92e9f9696b0a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166f92ec9b1e7954], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.166f92ecd0901973], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.166f92f0f7caf679], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:39:16.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7872" for this suite. 
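------------------------------
For reference, an overcommit-style pause pod carrying that per-pod ephemeral-storage request could be expressed roughly as below; this is a sketch assuming the quantity from "Using pod capacity", not the suite's exact pod template.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Per-pod ephemeral-storage request/limit taken from the log's
	// "Using pod capacity: 235846652313".
	perPod := resource.NewQuantity(235846652313, resource.BinarySI)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-0"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "overcommit-0",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: *perPod},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: *perPod},
				},
			}},
		},
	}
	fmt.Println(pod.Name, perPod)
}
------------------------------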
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:47.468 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":16,"completed":11,"skipped":4799,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:39:17.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Mar 25 11:39:19.706: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 25 11:39:20.835: INFO: Waiting for terminating namespaces to be deleted... 
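------------------------------
The hostPort spec that starts here relies on the rule that two pods only clash when hostPort, hostIP and protocol all coincide. The sketch below shows that idea in simplified form; the real check is stricter (for example, a 0.0.0.0 hostIP is treated as conflicting with any address on the same port and protocol), and the port value is a placeholder.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// conflicts is a simplified version of the rule this spec exercises: same
// hostPort alone is not enough, hostIP and protocol must also match.
func conflicts(a, b v1.ContainerPort) bool {
	return a.HostPort == b.HostPort && a.HostIP == b.HostIP && a.Protocol == b.Protocol
}

func main() {
	udp := v1.ContainerPort{HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolUDP}
	tcp := v1.ContainerPort{HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP}
	fmt.Println(conflicts(udp, tcp)) // false: same hostPort and hostIP, different protocol
}
------------------------------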
Mar 25 11:39:21.479: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Mar 25 11:39:21.881: INFO: kindnet-bpcmh from kube-system started at 2021-03-25 11:19:46 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.881: INFO: Container kindnet-cni ready: true, restart count 0
Mar 25 11:39:21.881: INFO: kube-proxy-kjrrj from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.881: INFO: Container kube-proxy ready: true, restart count 0
Mar 25 11:39:21.881: INFO: hostexec-latest-worker-8c4pd from persistent-local-volumes-test-9173 started at 2021-03-25 11:38:25 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.881: INFO: Container agnhost-container ready: true, restart count 0
Mar 25 11:39:21.881: INFO: pod-4f0090bf-3dca-491d-b1e0-d42b7e34c629 from persistent-local-volumes-test-9173 started at 2021-03-25 11:38:41 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.881: INFO: Container write-pod ready: true, restart count 0
Mar 25 11:39:21.882: INFO: pod-97a52e03-3812-4f58-8b3f-e1fe43c5c708 from persistent-local-volumes-test-9173 started at 2021-03-25 11:39:11 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container write-pod ready: false, restart count 0
Mar 25 11:39:21.882: INFO: back-off-cap from pods-708 started at 2021-03-25 11:22:11 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container back-off-cap ready: false, restart count 8
Mar 25 11:39:21.882: INFO: overcommit-11 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-11 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-12 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-12 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-17 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-17 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-18 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-18 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-19 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-19 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-2 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-2 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-4 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-4 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-5 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-5 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-7 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-7 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: overcommit-8 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container overcommit-8 ready: true, restart count 0
Mar 25 11:39:21.882: INFO: affinity-nodeport-4mf74 from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container affinity-nodeport ready: true, restart count 0
Mar 25 11:39:21.882: INFO: execpod-affinityrhj5l from services-507 started at 2021-03-25 11:38:04 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:21.882: INFO: Container agnhost-container ready: true, restart count 0
Mar 25 11:39:21.882: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Mar 25 11:39:22.144: INFO: pvc-volume-tester-gqglb from csi-mock-volumes-7987 started at 2021-03-24 09:41:54 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container volume-tester ready: false, restart count 0
Mar 25 11:39:22.144: INFO: kindnet-7xphn from kube-system started at 2021-03-24 20:36:55 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container kindnet-cni ready: true, restart count 0
Mar 25 11:39:22.144: INFO: kube-proxy-dv4wd from kube-system started at 2021-03-22 08:06:55 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container kube-proxy ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-0 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-0 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-1 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-1 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-10 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-10 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-13 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-13 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-14 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-14 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-15 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-15 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-16 from sched-pred-7872 started at 2021-03-25 11:38:32 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-16 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-3 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-3 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-6 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-6 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: overcommit-9 from sched-pred-7872 started at 2021-03-25 11:38:31 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container overcommit-9 ready: true, restart count 0
Mar 25 11:39:22.144: INFO: pod-with-label-security-s1 from sched-priority-2092 started at 2021-03-25 11:36:59 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container pod-with-label-security-s1 ready: false, restart count 0
Mar 25 11:39:22.144: INFO: affinity-nodeport-w5pbb from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container affinity-nodeport ready: true, restart count 0
Mar 25 11:39:22.144: INFO: affinity-nodeport-wbfzs from services-507 started at 2021-03-25 11:37:54 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container affinity-nodeport ready: true, restart count 0
Mar 25 11:39:22.144: INFO: sample-webhook-deployment-8977db-zsnrz from webhook-3288 started at 2021-03-25 11:39:01 +0000 UTC (1 container statuses recorded)
Mar 25 11:39:22.144: INFO: Container sample-webhook ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2db69094-7e58-4db5-aff1-c15e47e7274e 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.15 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.15 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-2db69094-7e58-4db5-aff1-c15e47e7274e off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2db69094-7e58-4db5-aff1-c15e47e7274e
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:40:12.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-915" for this suite.
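The three pod-creation STEPs above all ask for hostPort 54321 yet none of them conflict, because pod1 and pod2 bind different host IPs and pod3 switches to UDP. A rough sketch of such pod specs with client-go types follows; the helper name, the nodeSelector key/value (standing in for the random kubernetes.io/e2e-... label the test applies), and the image are illustrative assumptions, not the test's exact spec.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithHostPort builds a pod whose container exposes hostPort 54321 with the
// given hostIP and protocol. The real test pins all three pods to one node via
// a randomly generated node label and a matching nodeSelector; the label key
// and value below are placeholders for that mechanism.
func podWithHostPort(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-host-port": "target-node"}, // placeholder
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	// pod1 and pod2 share TCP/54321 but bind different host IPs; pod3 reuses
	// pod2's hostIP but switches to UDP, so none of the three collide.
	for _, p := range []*corev1.Pod{
		podWithHostPort("pod1", "127.0.0.1", corev1.ProtocolTCP),
		podWithHostPort("pod2", "172.18.0.15", corev1.ProtocolTCP),
		podWithHostPort("pod3", "172.18.0.15", corev1.ProtocolUDP),
	} {
		port := p.Spec.Containers[0].Ports[0]
		fmt.Printf("%s -> %s %s:%d\n", p.Name, port.Protocol, port.HostIP, port.HostPort)
	}
}

The scheduler treats a host port as occupied per (hostIP, hostPort, protocol) combination, so distinct host IPs or protocols coexist on one node; a wildcard hostIP of 0.0.0.0 would, by contrast, still collide with a specific IP on the same port and protocol.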
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:56.090 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":16,"completed":12,"skipped":4996,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:40:13.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Mar 25 11:40:15.130: INFO: Waiting up to 1m0s for all nodes to be ready
Mar 25 11:41:15.347: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node latest-worker.
STEP: Apply 10 fake resource to node latest-worker2.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
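The "medium" pod above only fits because the scheduler preempts a lower-priority pod: the nodes' fake resource is already 9/10 occupied and the pod carries a topology spread constraint over the dedicated kubernetes.io/e2e-pts-preemption key applied earlier. A sketch of what such a pod spec can look like follows; the extended resource name, priority class name, labels and quantities are placeholders, not the test's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mediumPod sketches the shape of the "medium" pod: it requests one unit of a
// fake extended resource (placeholder name) and constrains its spread across
// the two nodes labelled with the dedicated e2e-pts-preemption topology key,
// so scheduling it requires evicting a lower-priority victim.
func mediumPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"app": "pts-demo"}, // placeholder selector label
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // placeholder; must outrank the "low" pods
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-demo"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceName("example.com/fake-resource"): resource.MustParse("1"), // placeholder
					},
					Limits: corev1.ResourceList{
						corev1.ResourceName("example.com/fake-resource"): resource.MustParse("1"),
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println("sketched pod:", mediumPod().Name)
}

Given a spec of this shape, preemption selects lower-priority victims rather than the high-priority pod, which matches the final checks above that "high", "low-1" and "medium" are still running once the medium pod is placed.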
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:42:56.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-7167" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:163.975 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":16,"completed":13,"skipped":5312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 25 11:42:57.290: INFO: Running AfterSuite actions on all nodes
Mar 25 11:42:57.290: INFO: Running AfterSuite actions on node 1
Mar 25 11:42:57.290: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling/junit_01.xml
{"msg":"Test Suite completed","total":16,"completed":13,"skipped":5724,"failed":0}
Ran 13 of 5737 Specs in 1042.028 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5724 Skipped
PASS