I0524 20:16:51.548827 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0524 20:16:51.549011 17 e2e.go:129] Starting e2e run "649054fe-c9dc-4ca2-9852-23cc3d40f925" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1621887409 - Will randomize all specs
Will run 12 of 5667 specs
May 24 20:16:51.578: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:16:51.581: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 24 20:16:51.612: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 20:16:51.681: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 20:16:51.681: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 24 20:16:51.681: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 24 20:16:51.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
May 24 20:16:51.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 24 20:16:51.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed)
May 24 20:16:51.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 24 20:16:51.691: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
May 24 20:16:51.691: INFO: e2e test version: v1.20.6
May 24 20:16:51.693: INFO: kube-apiserver version: v1.20.7
May 24 20:16:51.693: INFO: >>> kubeConfig: /root/.kube/config
May 24 20:16:51.699: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:16:51.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
May 24 20:16:51.732: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 24 20:16:51.742: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
May 24 20:16:51.745: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 24 20:16:51.754: INFO: Waiting for terminating namespaces to be deleted...
May 24 20:16:51.758: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
May 24 20:16:51.766: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container coredns ready: true, restart count 0
May 24 20:16:51.766: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container coredns ready: true, restart count 0
May 24 20:16:51.766: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container loopdev ready: true, restart count 0
May 24 20:16:51.766: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container kindnet-cni ready: true, restart count 13
May 24 20:16:51.766: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container kube-multus ready: true, restart count 0
May 24 20:16:51.766: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container kube-proxy ready: true, restart count 0
May 24 20:16:51.766: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container setsysctls ready: true, restart count 0
May 24 20:16:51.766: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.766: INFO: Container speaker ready: true, restart count 0
May 24 20:16:51.766: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
May 24 20:16:51.774: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container loopdev ready: true, restart count 0
May 24 20:16:51.774: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container kindnet-cni ready: true, restart count 13
May 24 20:16:51.774: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container kube-multus ready: true, restart count 1
May 24 20:16:51.774: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container kube-proxy ready: true, restart count 0
May 24 20:16:51.774: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container setsysctls ready: true, restart count 0
May 24 20:16:51.774: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container controller ready: true, restart count 0
May 24 20:16:51.774: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container speaker ready: true, restart count 0
May 24 20:16:51.774: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container contour ready: true, restart count 0
May 24 20:16:51.774: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded)
May 24 20:16:51.774: INFO: Container contour ready: true, restart count 0
[It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122
May 24 20:16:51.801: INFO: Pod coredns-74ff55c5b-fbrvj requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod coredns-74ff55c5b-glnw8 requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod create-loop-devs-d9nvq requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod create-loop-devs-nbf25 requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod kindnet-kx9mk requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod kindnet-svp2q requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod kube-multus-ds-2n6bd requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod kube-multus-ds-n48bs requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod kube-proxy-7g274 requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod kube-proxy-mp68m requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod tune-sysctls-vjdll requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod tune-sysctls-xlbbr requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod controller-675995489c-h2wms requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod speaker-55zcr requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod speaker-9vpld requesting local ephemeral resource =0 on Node leguer-worker
May 24 20:16:51.801: INFO: Pod contour-6648989f79-2vldk requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Pod contour-6648989f79-8gz4z requesting local ephemeral resource =0 on Node leguer-worker2
May 24 20:16:51.801: INFO: Using pod capacity: 47063248896
May 24 20:16:51.801: INFO: Node: leguer-worker2 has local ephemeral resource allocatable: 470632488960
May 24 20:16:51.801: INFO: Node: leguer-worker has local ephemeral resource allocatable: 470632488960
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one
May 24 20:16:51.893: INFO: Waiting for running...
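The per-pod sizing behind "Using pod capacity: 47063248896" follows directly from the logged allocatable values: each worker reports 470632488960 bytes of allocatable local ephemeral storage, the per-pod figure is exactly one tenth of that, and ten pods per worker across two workers gives the 20 pods started here. A minimal arithmetic sketch of that relationship; the divide-by-ten step is inferred from the logged ratio, not taken from the test source:

```go
package main

import "fmt"

func main() {
	// Values copied from the log above.
	allocatablePerNode := int64(470632488960) // allocatable ephemeral storage per worker, in bytes
	workers := int64(2)                       // leguer-worker and leguer-worker2

	// Inferred: per-pod capacity is allocatable/10, matching "Using pod capacity: 47063248896".
	podCapacity := allocatablePerNode / 10
	podsToSaturate := workers * allocatablePerNode / podCapacity

	fmt.Println("per-pod ephemeral request:", podCapacity)  // 47063248896
	fmt.Println("pods needed to saturate:", podsToSaturate) // 20
	// A 21st pod cannot fit anywhere, which is what the "additional-pod"
	// FailedScheduling event further down demonstrates.
}
```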
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16821a0206e30f29], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-0 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16821a0227bfce89], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.237/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16821a02390a40f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16821a025214b9f2], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16821a025ec3cc81], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16821a0207292f15], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-1 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16821a023f212873], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.69/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16821a0260194a9a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16821a0262c1bdc4], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16821a0271df33ae], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16821a0209e9fc5a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-10 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16821a025b03ee0b], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.243/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16821a0272e5032a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16821a02765c9ac2], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16821a028ad4d13c], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16821a020a1d8e94], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-11 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16821a025b030434], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.74/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16821a027204843b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16821a0275eb4cc3], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16821a028af3a50a], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16821a020a6d5880], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-12 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16821a025b0f29ea], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.77/24]] 
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16821a02734084a4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16821a0277acb43b], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16821a028ad60446], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16821a020a9ffa5d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-13 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16821a025b2c6ee3], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.73/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16821a0272227fd1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16821a0276fc4916], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16821a028af0b792], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16821a020ae0dc07], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-14 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16821a025b32354f], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.242/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16821a027333761e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16821a0277262f60], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16821a028af40757], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16821a020b0edef8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-15 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16821a025db4180a], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.241/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16821a02711c2e23], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16821a0274e73dc8], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16821a028af21731], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16821a020b2ed011], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-16 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16821a025b2bad68], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.78/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16821a02732404b8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16821a0277a4dd78], Reason = [Created], Message = [Created container 
overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16821a028ad64a73], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16821a020b58a577], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-17 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16821a025b29935b], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.245/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16821a0271e7d523], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16821a0275eb4faa], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16821a028af1c409], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16821a020b85d005], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-18 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16821a025b036eef], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.75/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16821a02728deaeb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16821a0276d285ca], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16821a028af237e7], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16821a020bbac85d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-19 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16821a025b022f09], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.76/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16821a0272d43848], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16821a02774de5a4], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16821a028af2e88b], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16821a0207fc94cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-2 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16821a024175f507], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.238/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16821a02601cde94], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16821a0262c11cbe], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16821a0271e6a2df], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16821a02084a360d], Reason = [Scheduled], Message = [Successfully assigned 
sched-pred-4339/overcommit-3 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16821a025b16b36d], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.71/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16821a0272b8ec81], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16821a02772c281d], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16821a028af194f6], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16821a020893e9dc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-4 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16821a025abc92d8], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.70/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16821a0273314799], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16821a0277b30587], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16821a028af04f8d], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16821a0208d128d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-5 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16821a025b032b68], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.240/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16821a0272f43e20], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16821a02770c081d], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16821a028aefeeb8], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16821a02090c7702], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-6 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16821a025b095c8e], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.239/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16821a0272441f0e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16821a0276c1e4bc], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16821a028af4a853], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16821a020945f732], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-7 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16821a025b0223d1], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.244/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16821a0272f0494b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" 
already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16821a0276f18ca7], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16821a028ad4d12b], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16821a020982ad36], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-8 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16821a025b30ca1f], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.246/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16821a027226b0f8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16821a027642711c], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16821a028aefbe3e], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16821a0209b51136], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4339/overcommit-9 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16821a025b0b9a78], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.72/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16821a02732051bd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16821a02777d4421], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16821a028aeffded], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16821a0463a14ce9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:17:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4339" for this suite. 
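The final FailedScheduling event ("2 Insufficient ephemeral-storage") is the assertion this spec is really about: once the 20 overcommit pods hold every byte of allocatable ephemeral storage, one more pod with the same request is unschedulable. A hedged sketch of what such a pod spec looks like; the field choices below are illustrative, not copied from predicates.go:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative overcommit-style pod: a pause container whose
	// ephemeral-storage request/limit is one tenth of a node's allocatable,
	// so ten of them exactly fill one worker.
	qty := resource.NewQuantity(47063248896, resource.BinarySI)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: *qty},
					Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: *qty},
				},
			}},
		},
	}
	fmt.Printf("%s requests ephemeral-storage=%s\n", pod.Name, qty.String())
}
```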
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:11.346 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":1,"skipped":177,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:17:03.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 24 20:17:03.090: INFO: Waiting up to 1m0s for all nodes to be ready
May 24 20:18:03.142: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node leguer-worker.
STEP: Apply 10 fake resource to node leguer-worker2.
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
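The medium pod's spread constraint is what forces the preemption decision: with 9/10 of the fake resource consumed on both nodes, the only way to satisfy the constraint is to evict a lower-priority pod. A sketch of the kind of constraint involved, using the dedicated topology key logged above; the concrete maxSkew, selector and priority values in preemption.go may differ:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Spread "medium" pods evenly across the two workers, keyed on the
	// label the test applied to them. Values here are assumptions.
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-preemption",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "medium"}, // hypothetical selector
		},
	}
	fmt.Printf("maxSkew=%d over %q, unsatisfiable=%s\n",
		constraint.MaxSkew, constraint.TopologyKey, constraint.WhenUnsatisfiable)
	// Because DoNotSchedule is a hard rule, the scheduler must preempt a
	// lower-priority pod on whichever node would otherwise break the skew,
	// which is what "validates proper pods are preempted" checks.
}
```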
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node leguer-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:18:30.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4827" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:87.394 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":2,"skipped":188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:18:30.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135
May 24 20:18:30.475: INFO: Waiting up to 1m0s for all nodes to be ready
May 24 20:19:30.520: INFO: Waiting for terminating namespaces to be deleted...
May 24 20:19:30.524: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 20:19:30.538: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 20:19:30.538: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
[It] Pod should avoid nodes that have avoidPod annotation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238
May 24 20:19:30.538: INFO: ComputeCPUMemFraction for node: leguer-worker
May 24 20:19:30.552: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320
May 24 20:19:30.552: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320
May 24 20:19:30.552: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200
May 24 20:19:30.552: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800
May 24 20:19:30.552: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800
May 24 20:19:30.552: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200
May 24 20:19:30.552: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200
May 24 20:19:30.552: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200
May 24 20:19:30.552: INFO: Node: leguer-worker, totalRequestedCPUResource: 500, cpuAllocatableMil: 88000, cpuFraction: 0.005681818181818182
May 24 20:19:30.552: INFO: Node: leguer-worker, totalRequestedMemResource: 356515840, memAllocatableVal: 67430219776, memFraction: 0.005287181936887182
May 24 20:19:30.552: INFO: ComputeCPUMemFraction for node: leguer-worker2
May 24 20:19:30.564: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800
May 24 20:19:30.564: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800
May 24 20:19:30.564: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200
May 24 20:19:30.564: INFO: Node: leguer-worker2, totalRequestedCPUResource: 300, cpuAllocatableMil: 88000, cpuFraction: 0.003409090909090909
May 24 20:19:30.564: INFO: Node: leguer-worker2, totalRequestedMemResource: 209715200, memAllocatableVal: 67430219776, memFraction: 0.003110107021698342
May 24 20:19:30.574: INFO: Waiting for running...
May 24 20:19:35.630: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
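Before exercising the avoidPod priority, the test levels the two nodes by creating one "balanced" filler pod per node, sized so that requested CPU and memory both land at exactly half of allocatable. The filler sizes logged in the next block (43500 millicores and 33358594048 bytes on leguer-worker) follow from the numbers just printed; a minimal sketch of that arithmetic, inferred from the logged values rather than from priorities.go:

```go
package main

import "fmt"

func main() {
	// leguer-worker, from the ComputeCPUMemFraction output above.
	cpuAllocatable := int64(88000)       // millicores
	memAllocatable := int64(67430219776) // bytes
	cpuRequested := int64(500)           // existing kube-system/metallb pods
	memRequested := int64(356515840)

	// Inferred rule: size the filler pod so the node ends at a 0.5 fraction.
	fillerCPU := cpuAllocatable/2 - cpuRequested
	fillerMem := memAllocatable/2 - memRequested

	fmt.Println("filler cpu (m):", fillerCPU)     // 43500
	fmt.Println("filler mem (bytes):", fillerMem) // 33358594048
}
```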
May 24 20:19:40.681: INFO: ComputeCPUMemFraction for node: leguer-worker
May 24 20:19:40.696: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320
May 24 20:19:40.696: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320
May 24 20:19:40.696: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200
May 24 20:19:40.696: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800
May 24 20:19:40.696: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800
May 24 20:19:40.696: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200
May 24 20:19:40.696: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200
May 24 20:19:40.696: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200
May 24 20:19:40.696: INFO: Pod for on the node: 844beb84-9dd5-4920-b041-e8e55a783758-0, Cpu: 43500, Mem: 33358594048
May 24 20:19:40.696: INFO: Node: leguer-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5
May 24 20:19:40.696: INFO: Node: leguer-worker, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 24 20:19:40.696: INFO: ComputeCPUMemFraction for node: leguer-worker2
May 24 20:19:40.711: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800
May 24 20:19:40.711: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800
May 24 20:19:40.711: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200
May 24 20:19:40.711: INFO: Pod for on the node: b22a77e2-3441-433f-b989-6e6858f4bba3-0, Cpu: 43700, Mem: 33505394688
May 24 20:19:40.711: INFO: Node: leguer-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5
May 24 20:19:40.711: INFO: Node: leguer-worker2, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8831 to 1
STEP: Verify the pods should not scheduled to the node: leguer-worker
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8831, will wait for the garbage collector to delete the pods
May 24 20:19:53.234: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 155.808718ms
May 24 20:19:54.734: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 1.500232826s
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:20:09.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-8831" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132
• [SLOW TEST:99.186 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should avoid nodes that have avoidPod annotation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":3,"skipped":324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:20:09.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135
May 24 20:20:09.666: INFO: Waiting up to 1m0s for all nodes to be ready
May 24 20:21:09.727: INFO: Waiting for terminating namespaces to be deleted...
May 24 20:21:09.731: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 24 20:21:09.745: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 24 20:21:09.745: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
[It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302 May 24 20:21:09.745: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:21:09.759: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:21:09.759: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:21:09.759: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:21:09.759: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:21:09.759: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:21:09.759: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:21:09.759: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:21:09.759: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:21:09.759: INFO: Node: leguer-worker, totalRequestedCPUResource: 500, cpuAllocatableMil: 88000, cpuFraction: 0.005681818181818182 May 24 20:21:09.759: INFO: Node: leguer-worker, totalRequestedMemResource: 356515840, memAllocatableVal: 67430219776, memFraction: 0.005287181936887182 May 24 20:21:09.759: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:21:09.774: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:21:09.774: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:21:09.774: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:21:09.774: INFO: Node: leguer-worker2, totalRequestedCPUResource: 300, cpuAllocatableMil: 88000, cpuFraction: 0.003409090909090909 May 24 20:21:09.774: INFO: Node: leguer-worker2, totalRequestedMemResource: 209715200, memAllocatableVal: 67430219776, memFraction: 0.003110107021698342 May 24 20:21:09.784: INFO: Waiting for running... May 24 20:21:14.841: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 24 20:21:19.892: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:21:19.941: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:21:19.941: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:21:19.941: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:21:19.941: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:21:19.941: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:21:19.941: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:21:19.941: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:21:19.941: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:21:19.941: INFO: Pod for on the node: c32d2eb6-cc75-4ef0-ac44-020d3c371e8c-0, Cpu: 43500, Mem: 33358594048 May 24 20:21:19.941: INFO: Node: leguer-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 24 20:21:19.941: INFO: Node: leguer-worker, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 24 20:21:19.941: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:21:19.957: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:21:19.957: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:21:19.957: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:21:19.957: INFO: Pod for on the node: ffa1b0ea-2f6c-47df-85b2-d17d48c6e55e-0, Cpu: 43700, Mem: 33505394688 May 24 20:21:19.957: INFO: Node: leguer-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 24 20:21:19.957: INFO: Node: leguer-worker2, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5 STEP: Trying to apply 10 (tolerable) taints on the first node. 
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-fcfa047b-49cf-4d6a-91fd-b41010c8d622=testing-taint-value-bc5dd595-8729-477f-bf1b-ef65e0dd880e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1568bd9a-9f46-418e-a7f1-9217c70a09cc=testing-taint-value-32203e1c-adf3-4e9a-ace3-8bd88734423f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8e88c22e-6199-416d-ac75-bb9d01b0ca3b=testing-taint-value-9e9041ed-deae-430d-a9b3-506d30e99d54:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d51a282a-d45c-4732-b34f-da4fa6bbc1eb=testing-taint-value-eef89b96-4b33-4de9-acfb-25f70c694a7c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-13f9c6ee-f24d-4dc6-a4c0-60032ac96c09=testing-taint-value-4503b1f8-d237-451e-a647-2ef270939265:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-140c5e33-7fe3-40be-9ed6-27a2639c2002=testing-taint-value-48d0fadb-43bd-4624-b450-813e7245982a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-92e041af-5918-4380-a7a8-58e70ef9a0cf=testing-taint-value-6fb2779e-e498-47fb-be83-51e65ff7bcb7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2fd17a57-b344-41b8-a401-6c7046538438=testing-taint-value-54fa6a0f-8740-4b61-a8b6-9e25541c4125:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-fd76e54c-3086-44da-a166-8b69890f3eb9=testing-taint-value-c61f9e2f-1171-4d06-bf91-ea92993539e8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b31f1541-94f5-42ef-822a-4a8d2efb9a32=testing-taint-value-70648eee-bc42-4213-a04b-84b6181b97d4:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-482a96ba-6ed7-44da-b6e3-8ff609165d20=testing-taint-value-e85c889b-b828-4d21-a215-0e5822737df2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7b0acbc8-90fd-4968-99af-ae40a984137c=testing-taint-value-eb22637a-e29b-414d-8459-7c10b0db45e7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d1d12119-2c96-4fba-a8f2-336a01055444=testing-taint-value-cdcf56df-361a-41c5-ac88-f2a03dc51d50:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-36f6d0d8-e67d-4c61-aeb3-d7cee315e8c1=testing-taint-value-e1b9365e-f2a3-4e47-b9c2-71fd2774f5a3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-afde1f09-e515-40b7-b574-d030d6b36436=testing-taint-value-6cbaf2e3-0ffe-46b1-a0cb-f11d35fa5ae3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-eb36b774-5745-4d0b-a745-4479f64377b5=testing-taint-value-8e1f1f1d-4bbd-4794-ad8a-1e7e59c9e9b1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-19bc5458-93ce-4e4e-b711-1495f2b846ea=testing-taint-value-81bda9ff-827c-47eb-91ac-238d0d6dbb15:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9362011f-9950-48b2-b688-1592268a97fa=testing-taint-value-1bc70a0c-3b1a-4611-b474-0fde884adbd2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c41c5b90-7d28-4577-92e6-69308ea05970=testing-taint-value-8fd4bdbf-7dc1-4ccb-bcee-4137f14b9222:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-52727fbf-7fd0-4643-9491-1c860046d79e=testing-taint-value-f15b63d2-4932-40b4-9d3f-e4b83f283ce4:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-52727fbf-7fd0-4643-9491-1c860046d79e=testing-taint-value-f15b63d2-4932-40b4-9d3f-e4b83f283ce4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c41c5b90-7d28-4577-92e6-69308ea05970=testing-taint-value-8fd4bdbf-7dc1-4ccb-bcee-4137f14b9222:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9362011f-9950-48b2-b688-1592268a97fa=testing-taint-value-1bc70a0c-3b1a-4611-b474-0fde884adbd2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-19bc5458-93ce-4e4e-b711-1495f2b846ea=testing-taint-value-81bda9ff-827c-47eb-91ac-238d0d6dbb15:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eb36b774-5745-4d0b-a745-4479f64377b5=testing-taint-value-8e1f1f1d-4bbd-4794-ad8a-1e7e59c9e9b1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-afde1f09-e515-40b7-b574-d030d6b36436=testing-taint-value-6cbaf2e3-0ffe-46b1-a0cb-f11d35fa5ae3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-36f6d0d8-e67d-4c61-aeb3-d7cee315e8c1=testing-taint-value-e1b9365e-f2a3-4e47-b9c2-71fd2774f5a3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d1d12119-2c96-4fba-a8f2-336a01055444=testing-taint-value-cdcf56df-361a-41c5-ac88-f2a03dc51d50:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7b0acbc8-90fd-4968-99af-ae40a984137c=testing-taint-value-eb22637a-e29b-414d-8459-7c10b0db45e7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-482a96ba-6ed7-44da-b6e3-8ff609165d20=testing-taint-value-e85c889b-b828-4d21-a215-0e5822737df2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b31f1541-94f5-42ef-822a-4a8d2efb9a32=testing-taint-value-70648eee-bc42-4213-a04b-84b6181b97d4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-fd76e54c-3086-44da-a166-8b69890f3eb9=testing-taint-value-c61f9e2f-1171-4d06-bf91-ea92993539e8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2fd17a57-b344-41b8-a401-6c7046538438=testing-taint-value-54fa6a0f-8740-4b61-a8b6-9e25541c4125:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-92e041af-5918-4380-a7a8-58e70ef9a0cf=testing-taint-value-6fb2779e-e498-47fb-be83-51e65ff7bcb7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-140c5e33-7fe3-40be-9ed6-27a2639c2002=testing-taint-value-48d0fadb-43bd-4624-b450-813e7245982a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-13f9c6ee-f24d-4dc6-a4c0-60032ac96c09=testing-taint-value-4503b1f8-d237-451e-a647-2ef270939265:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d51a282a-d45c-4732-b34f-da4fa6bbc1eb=testing-taint-value-eef89b96-4b33-4de9-acfb-25f70c694a7c:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-8e88c22e-6199-416d-ac75-bb9d01b0ca3b=testing-taint-value-9e9041ed-deae-430d-a9b3-506d30e99d54:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1568bd9a-9f46-418e-a7f1-9217c70a09cc=testing-taint-value-32203e1c-adf3-4e9a-ace3-8bd88734423f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-fcfa047b-49cf-4d6a-91fd-b41010c8d622=testing-taint-value-bc5dd595-8729-477f-bf1b-ef65e0dd880e:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:21:38.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8008" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:88.771 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":4,"skipped":488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:21:38.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 20:21:38.442: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 20:21:38.451: INFO: Waiting for terminating namespaces to be deleted... 
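For the "scheduled to nodes pod can tolerate" spec above, the mechanics are: the chosen node receives ten PreferNoSchedule taints the test pod tolerates, every other schedulable node receives ten it does not, and the scheduler's taint/toleration scoring should therefore prefer the first node. A sketch of one taint/toleration pair of that shape; the UUID-based keys and values in the log are replaced with invented stand-ins:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Stand-in names; the test generates random keys and values per run.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example",
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}

	// PreferNoSchedule is soft: untolerated pods can still land on the node,
	// they are only scored down. The test therefore checks placement
	// preference, not a scheduling failure.
	fmt.Println("tolerates:", toleration.ToleratesTaint(&taint)) // true
}
```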
May 24 20:21:38.455: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 20:21:38.464: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container coredns ready: true, restart count 0 May 24 20:21:38.464: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container coredns ready: true, restart count 0 May 24 20:21:38.464: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:38.464: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:38.464: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container kube-multus ready: true, restart count 0 May 24 20:21:38.464: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:38.464: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container setsysctls ready: true, restart count 0 May 24 20:21:38.464: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container speaker ready: true, restart count 0 May 24 20:21:38.464: INFO: with-tolerations from sched-priority-8008 started at 2021-05-24 20:21:21 +0000 UTC (1 container statuses recorded) May 24 20:21:38.464: INFO: Container with-tolerations ready: true, restart count 0 May 24 20:21:38.464: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 20:21:38.472: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:38.472: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:38.472: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container kube-multus ready: true, restart count 1 May 24 20:21:38.472: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:38.472: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container setsysctls ready: true, restart count 0 May 24 20:21:38.472: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container controller ready: true, restart count 0 May 24 20:21:38.472: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 
20:21:38.472: INFO: Container speaker ready: true, restart count 0 May 24 20:21:38.472: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container contour ready: true, restart count 0 May 24 20:21:38.472: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 20:21:38.472: INFO: Container contour ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-b2d37750-ab57-4ff8-8b8a-b6da2ad55982 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44c5b82883], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6197/without-toleration to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44e3143d07], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.252/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44efe501e0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44f11ed0c1], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a4503f06601], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a454baad80b], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16821a455e8292e5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16821a455e8292e5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44c5b82883], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6197/without-toleration to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44e3143d07], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.252/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44efe501e0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a44f11ed0c1], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a4503f06601], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16821a454baad80b], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16821a4603ec1348], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6197/still-no-tolerations to leguer-worker] STEP: removing the label kubernetes.io/e2e-label-key-b2d37750-ab57-4ff8-8b8a-b6da2ad55982 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-b2d37750-ab57-4ff8-8b8a-b6da2ad55982 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:21:44.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6197" for this suite. 
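The predicate above taints a node with kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e=testing-taint-value:NoSchedule and then launches "still-no-tolerations" without any toleration, so the FailedScheduling events are the expected result. A minimal Go sketch (not part of the test output) of the toleration that would let a pod tolerate that taint; the pause image matches the one logged above, the container name is illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Toleration matching a taint of the form <key>=testing-taint-value:NoSchedule,
	// as applied by the test above. The "still-no-tolerations" pod deliberately
	// omits this, which is why the scheduler reports
	// "node(s) had taint ... that the pod didn't tolerate".
	tol := corev1.Toleration{
		Key:      "kubernetes.io/e2e-taint-key-45d17f4d-06be-493f-86bc-139b407dfa3e",
		Operator: corev1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   corev1.TaintEffectNoSchedule,
	}

	spec := corev1.PodSpec{
		Containers:  []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		Tolerations: []corev1.Toleration{tol},
	}
	fmt.Printf("pod %q tolerates %s=%s:%s\n", spec.Containers[0].Name, tol.Key, tol.Value, tol.Effect)
}
```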
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:5.933 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":5,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:21:44.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 20:21:44.382: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 20:21:44.390: INFO: Waiting for terminating namespaces to be deleted... 
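The PodOverhead spec announced above registers a RuntimeClass plus a fake extended resource, and later fails scheduling with "Insufficient example.com/beardsecond" once a filler pod and the overhead exhaust that resource. A hedged sketch of a RuntimeClass with pod-level overhead and a pod that opts into it; the object name, handler, and quantities are illustrative and not taken from the test.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass with pod overhead: the scheduler adds Overhead.PodFixed to the
	// sum of container requests before fitting the pod onto a node, which is what
	// "verify pod overhead is accounted for" exercises.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-runtimeclass"}, // illustrative name
		Handler:    "runc",                                          // illustrative handler
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}

	// A pod opts in by naming the RuntimeClass; its effective request becomes
	// container requests + rc.Overhead.PodFixed.
	rcName := rc.Name
	pod := corev1.PodSpec{
		RuntimeClassName: &rcName,
		Containers:       []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
	}
	cpu := rc.Overhead.PodFixed[corev1.ResourceCPU]
	fmt.Printf("runtimeClass %s adds %s CPU overhead to pod %q\n", *pod.RuntimeClassName, cpu.String(), pod.Containers[0].Name)
}
```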
May 24 20:21:44.394: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 20:21:44.402: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container coredns ready: true, restart count 0 May 24 20:21:44.402: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container coredns ready: true, restart count 0 May 24 20:21:44.402: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:44.402: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:44.402: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container kube-multus ready: true, restart count 0 May 24 20:21:44.402: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:44.402: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container setsysctls ready: true, restart count 0 May 24 20:21:44.402: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container speaker ready: true, restart count 0 May 24 20:21:44.402: INFO: still-no-tolerations from sched-pred-6197 started at 2021-05-24 20:21:43 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container still-no-tolerations ready: false, restart count 0 May 24 20:21:44.402: INFO: with-tolerations from sched-priority-8008 started at 2021-05-24 20:21:21 +0000 UTC (1 container statuses recorded) May 24 20:21:44.402: INFO: Container with-tolerations ready: true, restart count 0 May 24 20:21:44.402: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 20:21:44.409: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:44.409: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:44.409: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container kube-multus ready: true, restart count 1 May 24 20:21:44.409: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:44.409: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container setsysctls ready: true, restart count 0 May 24 20:21:44.409: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container 
statuses recorded) May 24 20:21:44.409: INFO: Container controller ready: true, restart count 0 May 24 20:21:44.409: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container speaker ready: true, restart count 0 May 24 20:21:44.409: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container contour ready: true, restart count 0 May 24 20:21:44.409: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 20:21:44.409: INFO: Container contour ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:216 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a46a7edd96f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a4872015c55], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9512/filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a48901cdf01], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.2/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a48a6516e5e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a48a7639818], Reason = [Created], Message = [Created container filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b] STEP: Considering event: Type = [Normal], Name = [filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b.16821a48b097290b], Reason = [Started], Message = [Started container filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b] STEP: Considering event: Type = [Normal], Name = [without-label.16821a462e91dde1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9512/without-label to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-label.16821a464c9fd3de], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.254/24]] STEP: Considering event: Type = [Normal], Name = [without-label.16821a4662bf4d81], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.16821a4664c30494], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = 
[without-label.16821a466cb6500c], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16821a46a6c1c86e], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [without-label.16821a46dc578215], Reason = [Failed], Message = [Error: failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:364: creating new parent process caused: container_linux.go:1991: running lstat on namespace path "/proc/1786450/ns/ipc" caused: lstat /proc/1786450/ns/ipc: no such file or directory: unknown] STEP: Considering event: Type = [Warning], Name = [additional-podeaa3b282-fa36-440e-8486-b43e966cb31c.16821a49000fbf50], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:21:57.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9512" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:13.480 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:211 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":6,"skipped":1459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:21:57.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 20:21:57.956: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 20:21:57.965: INFO: Waiting for terminating namespaces to be deleted... May 24 20:21:57.968: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 20:21:57.977: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container coredns ready: true, restart count 0 May 24 20:21:57.977: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container coredns ready: true, restart count 0 May 24 20:21:57.977: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:57.977: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:57.977: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container kube-multus ready: true, restart count 0 May 24 20:21:57.977: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:57.977: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container setsysctls ready: true, restart count 0 May 24 20:21:57.977: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container speaker ready: true, restart count 0 May 24 20:21:57.977: INFO: filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b from sched-pred-9512 started at 2021-05-24 20:21:54 +0000 UTC (1 container statuses recorded) May 24 20:21:57.977: INFO: Container filler-pod-bb954fc2-3483-4711-a489-e18739a9a17b ready: true, restart count 0 May 24 20:21:57.977: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 20:21:57.985: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container loopdev ready: true, restart count 0 May 24 20:21:57.985: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:21:57.985: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container kube-multus ready: true, restart count 1 May 24 20:21:57.985: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:21:57.985: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container setsysctls ready: true, 
restart count 0 May 24 20:21:57.985: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container controller ready: true, restart count 0 May 24 20:21:57.985: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container speaker ready: true, restart count 0 May 24 20:21:57.985: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container contour ready: true, restart count 0 May 24 20:21:57.985: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 20:21:57.985: INFO: Container contour ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1674171d-b218-46c8-971f-b2122a123be3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-1674171d-b218-46c8-971f-b2122a123be3 off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1674171d-b218-46c8-971f-b2122a123be3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:22:09.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1953" for this suite. 
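The spec above labels a node with kubernetes.io/e2e-1674171d-b218-46c8-971f-b2122a123be3=42 and relaunches the pod with a matching required node affinity, which is why it lands on the labelled node. A Go sketch of that kind of affinity; the label key and value are the ones logged above, the container name and image are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Required node affinity: the pod only fits nodes carrying the label that
	// the test applied to leguer-worker2.
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause:3.2"}},
		Affinity: &corev1.Affinity{
			NodeAffinity: &corev1.NodeAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/e2e-1674171d-b218-46c8-971f-b2122a123be3",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"42"},
						}},
					}},
				},
			},
		},
	}
	req := spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0]
	fmt.Printf("requires %s in %v\n", req.Key, req.Values)
}
```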
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:11.512 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":7,"skipped":1770,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:22:09.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 20:22:09.383: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 20:22:09.392: INFO: Waiting for terminating namespaces to be deleted... May 24 20:22:09.396: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 20:22:09.404: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container coredns ready: true, restart count 0 May 24 20:22:09.404: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container coredns ready: true, restart count 0 May 24 20:22:09.404: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container loopdev ready: true, restart count 0 May 24 20:22:09.404: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:22:09.404: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container kube-multus ready: true, restart count 0 May 24 20:22:09.404: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:22:09.404: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container setsysctls ready: true, restart count 0 May 24 20:22:09.404: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:22:09.404: INFO: Container speaker ready: true, restart count 
0 May 24 20:22:09.404: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 20:22:09.412: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container loopdev ready: true, restart count 0 May 24 20:22:09.412: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:22:09.412: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container kube-multus ready: true, restart count 1 May 24 20:22:09.412: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:22:09.412: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container setsysctls ready: true, restart count 0 May 24 20:22:09.412: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container controller ready: true, restart count 0 May 24 20:22:09.412: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container speaker ready: true, restart count 0 May 24 20:22:09.412: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container contour ready: true, restart count 0 May 24 20:22:09.412: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container contour ready: true, restart count 0 May 24 20:22:09.412: INFO: with-labels from sched-pred-1953 started at 2021-05-24 20:22:03 +0000 UTC (1 container statuses recorded) May 24 20:22:09.412: INFO: Container with-labels ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16821a4c007cbfd0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:22:10.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2008" for this suite. 
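In the negative case above, "restricted-pod" is given a nonempty nodeSelector that no node satisfies, so the single FailedScheduling event ("2 node(s) didn't match Pod's node affinity") is the expected outcome. A minimal sketch; the selector key/value are illustrative, not taken from the test.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A nodeSelector is a hard requirement: with a label no node carries, the
	// scheduler reports that the nodes don't match the pod's node affinity.
	spec := corev1.PodSpec{
		NodeSelector: map[string]string{"label": "nonempty"}, // illustrative, matches no node
		Containers:   []corev1.Container{{Name: "restricted-pod", Image: "k8s.gcr.io/pause:3.2"}},
	}
	fmt.Println(spec.NodeSelector)
}
```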
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":8,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:22:10.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135 May 24 20:22:10.584: INFO: Waiting up to 1m0s for all nodes to be ready May 24 20:23:10.628: INFO: Waiting for terminating namespaces to be deleted... May 24 20:23:10.632: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 24 20:23:10.646: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 24 20:23:10.646: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 24 20:23:12.671: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:23:12.685: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:23:12.685: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:23:12.685: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:23:12.685: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:23:12.685: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:23:12.685: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:23:12.685: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:23:12.685: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:23:12.685: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 24 20:23:12.685: INFO: Node: leguer-worker, totalRequestedCPUResource: 500, cpuAllocatableMil: 88000, cpuFraction: 0.005681818181818182 May 24 20:23:12.685: INFO: Node: leguer-worker, totalRequestedMemResource: 356515840, memAllocatableVal: 67430219776, memFraction: 0.005287181936887182 May 24 20:23:12.685: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:23:12.699: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:23:12.699: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:23:12.699: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:23:12.699: INFO: Node: leguer-worker2, totalRequestedCPUResource: 300, cpuAllocatableMil: 88000, cpuFraction: 0.003409090909090909 May 24 20:23:12.699: INFO: Node: leguer-worker2, totalRequestedMemResource: 209715200, memAllocatableVal: 67430219776, memFraction: 0.003110107021698342 May 24 20:23:12.705: INFO: Waiting for running... May 24 20:23:17.761: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
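The ComputeCPUMemFraction lines above are plain ratios of summed pod requests to node allocatable (for leguer-worker, 500m / 88000m and 356515840 / 67430219776 bytes); the "balanced" filler pods logged just below are then sized to bring both nodes to roughly the same fraction. A small sketch reproducing the arithmetic with the logged numbers.

```go
package main

import "fmt"

func main() {
	// Fractions as logged for leguer-worker before balancing:
	// totalRequestedCPUResource / cpuAllocatableMil and
	// totalRequestedMemResource / memAllocatableVal.
	cpuFraction := 500.0 / 88000.0             // ≈ 0.005682
	memFraction := 356515840.0 / 67430219776.0 // ≈ 0.005287
	fmt.Printf("cpuFraction=%.6f memFraction=%.6f\n", cpuFraction, memFraction)

	// After the 52299m balancing pod is added, the node sits near 0.6:
	fmt.Printf("balanced cpuFraction=%.6f\n", (500.0+52299.0)/88000.0) // ≈ 0.599989
}
```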
May 24 20:23:22.811: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:23:22.828: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:23:22.828: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:23:22.828: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:23:22.828: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:23:22.828: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:23:22.828: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:23:22.828: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:23:22.828: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:23:22.828: INFO: Pod for on the node: 914fac49-18a3-4606-bf9f-c4cde5569a0f-0, Cpu: 52299, Mem: 40101616025 May 24 20:23:22.828: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 24 20:23:22.828: INFO: Node: leguer-worker, totalRequestedCPUResource: 52799, cpuAllocatableMil: 88000, cpuFraction: 0.5999886363636364 May 24 20:23:22.828: INFO: Node: leguer-worker, totalRequestedMemResource: 40458131865, memAllocatableVal: 67430219776, memFraction: 0.5999999999911019 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 24 20:23:22.829: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:23:22.843: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:23:22.843: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:23:22.843: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:23:22.843: INFO: Pod for on the node: 62c8cc65-5ad2-42b7-a3c5-ad03289f8c2f-0, Cpu: 52500, Mem: 40248416665 May 24 20:23:22.843: INFO: Node: leguer-worker2, totalRequestedCPUResource: 52800, cpuAllocatableMil: 88000, cpuFraction: 0.6 May 24 20:23:22.843: INFO: Node: leguer-worker2, totalRequestedMemResource: 40458131865, memAllocatableVal: 67430219776, memFraction: 0.5999999999911019 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:23:39.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7920" for this suite. 
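The priority spec above launches pod-with-label-security-s1 (presumably carrying a security=s1 label, judging from its name), balances both nodes to about a 0.6 CPU/memory fraction, and then expects the anti-affinity pod to land on the node not hosting that pod. A hedged sketch of an anti-affinity term of that shape; whether the test uses the required or the preferred form, the term looks the same, and the sketch uses the required form with illustrative names.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity against pods labelled security=s1, keyed on hostname: the
	// scheduler avoids any node already running such a pod, so the test pod is
	// expected on the node that does NOT host pod-with-label-security-s1.
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "test-pod", Image: "k8s.gcr.io/pause:3.2"}},
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"security": "s1"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		},
	}
	term := spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0]
	fmt.Printf("anti-affinity to %v on %s\n", term.LabelSelector.MatchLabels, term.TopologyKey)
}
```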
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:88.688 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":9,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:23:39.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135 May 24 20:23:39.288: INFO: Waiting up to 1m0s for all nodes to be ready May 24 20:24:39.333: INFO: Waiting for terminating namespaces to be deleted... May 24 20:24:39.337: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 24 20:24:39.354: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 24 20:24:39.354: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:344 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 May 24 20:24:43.545: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:24:43.558: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:24:43.558: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:24:43.558: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:24:43.558: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:24:43.558: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:24:43.558: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:24:43.558: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:24:43.558: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:24:43.558: INFO: Node: leguer-worker, totalRequestedCPUResource: 500, cpuAllocatableMil: 88000, cpuFraction: 0.005681818181818182 May 24 20:24:43.558: INFO: Node: leguer-worker, totalRequestedMemResource: 356515840, memAllocatableVal: 67430219776, memFraction: 0.005287181936887182 May 24 20:24:43.558: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:24:43.574: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:24:43.574: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:24:43.574: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:24:43.574: INFO: Node: leguer-worker2, totalRequestedCPUResource: 300, cpuAllocatableMil: 88000, cpuFraction: 0.003409090909090909 May 24 20:24:43.574: INFO: Node: leguer-worker2, totalRequestedMemResource: 209715200, memAllocatableVal: 67430219776, memFraction: 0.003110107021698342 May 24 20:24:43.580: INFO: Waiting for running... May 24 20:24:48.636: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 24 20:24:53.686: INFO: ComputeCPUMemFraction for node: leguer-worker May 24 20:24:53.703: INFO: Pod for on the node: coredns-74ff55c5b-fbrvj, Cpu: 100, Mem: 73400320 May 24 20:24:53.703: INFO: Pod for on the node: coredns-74ff55c5b-glnw8, Cpu: 100, Mem: 73400320 May 24 20:24:53.703: INFO: Pod for on the node: create-loop-devs-d9nvq, Cpu: 100, Mem: 209715200 May 24 20:24:53.703: INFO: Pod for on the node: kindnet-svp2q, Cpu: 100, Mem: 52428800 May 24 20:24:53.703: INFO: Pod for on the node: kube-multus-ds-2n6bd, Cpu: 100, Mem: 52428800 May 24 20:24:53.703: INFO: Pod for on the node: kube-proxy-7g274, Cpu: 100, Mem: 209715200 May 24 20:24:53.703: INFO: Pod for on the node: tune-sysctls-xlbbr, Cpu: 100, Mem: 209715200 May 24 20:24:53.703: INFO: Pod for on the node: speaker-9vpld, Cpu: 100, Mem: 209715200 May 24 20:24:53.703: INFO: Pod for on the node: b0e5aac6-aac5-4b17-8f1b-63289cd527c0-0, Cpu: 43500, Mem: 33358594048 May 24 20:24:53.703: INFO: Node: leguer-worker, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 24 20:24:53.703: INFO: Node: leguer-worker, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 24 20:24:53.703: INFO: ComputeCPUMemFraction for node: leguer-worker2 May 24 20:24:53.719: INFO: Pod for on the node: create-loop-devs-nbf25, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: kindnet-kx9mk, Cpu: 100, Mem: 52428800 May 24 20:24:53.719: INFO: Pod for on the node: kube-multus-ds-n48bs, Cpu: 100, Mem: 52428800 May 24 20:24:53.719: INFO: Pod for on the node: kube-proxy-mp68m, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: tune-sysctls-vjdll, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: controller-675995489c-h2wms, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: speaker-55zcr, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: contour-6648989f79-2vldk, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: contour-6648989f79-8gz4z, Cpu: 100, Mem: 209715200 May 24 20:24:53.719: INFO: Pod for on the node: 479d3dec-a861-4736-95a5-b138c9a1730f-0, Cpu: 43700, Mem: 33505394688 May 24 20:24:53.719: INFO: Node: leguer-worker2, totalRequestedCPUResource: 44000, cpuAllocatableMil: 88000, cpuFraction: 0.5 May 24 20:24:53.719: INFO: Node: leguer-worker2, totalRequestedMemResource: 33715109888, memAllocatableVal: 67430219776, memFraction: 0.5 STEP: Run a ReplicaSet with 4 replicas on node "leguer-worker" STEP: Verifying if the test-pod lands on node "leguer-worker2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:352 STEP: removing the label kubernetes.io/e2e-pts-score off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:25:08.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5542" for this suite. 
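In the scoring spec above, four rs-e2e-pts-score replicas are pinned to leguer-worker and test-pod is then expected on leguer-worker2, where the skew across the dedicated topologyKey kubernetes.io/e2e-pts-score is lower. A sketch of a soft spread constraint of that shape; ScheduleAnyway is assumed (scoring rather than filtering), and the label selector is illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Soft topology spread: MaxSkew=1 across the test's dedicated topology key.
	// ScheduleAnyway makes this a scoring preference, so the least-loaded
	// topology domain (leguer-worker2 here) is preferred but not mandatory.
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "test-pod", Image: "k8s.gcr.io/pause:3.2"}},
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-score",
			WhenUnsatisfiable: corev1.ScheduleAnyway,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "e2e-pts-score"}, // illustrative selector
			},
		}},
	}
	c := spec.TopologySpreadConstraints[0]
	fmt.Printf("maxSkew=%d on %s (%s)\n", c.MaxSkew, c.TopologyKey, c.WhenUnsatisfiable)
}
```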
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:89.208 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:340 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":10,"skipped":2668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 20:25:08.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 May 24 20:25:08.514: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 20:25:08.523: INFO: Waiting for terminating namespaces to be deleted... 
May 24 20:25:08.526: INFO: Logging pods the apiserver thinks is on node leguer-worker before test May 24 20:25:08.535: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container coredns ready: true, restart count 0 May 24 20:25:08.535: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container coredns ready: true, restart count 0 May 24 20:25:08.535: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container loopdev ready: true, restart count 0 May 24 20:25:08.535: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:25:08.535: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container kube-multus ready: true, restart count 0 May 24 20:25:08.535: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:25:08.535: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container setsysctls ready: true, restart count 0 May 24 20:25:08.535: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container speaker ready: true, restart count 0 May 24 20:25:08.535: INFO: rs-e2e-pts-score-4tl9h from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container e2e-pts-score ready: true, restart count 0 May 24 20:25:08.535: INFO: rs-e2e-pts-score-67k6s from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container e2e-pts-score ready: true, restart count 0 May 24 20:25:08.535: INFO: rs-e2e-pts-score-8pb45 from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container e2e-pts-score ready: true, restart count 0 May 24 20:25:08.535: INFO: rs-e2e-pts-score-r9rhv from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded) May 24 20:25:08.535: INFO: Container e2e-pts-score ready: true, restart count 0 May 24 20:25:08.535: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test May 24 20:25:08.632: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container loopdev ready: true, restart count 0 May 24 20:25:08.632: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container kindnet-cni ready: true, restart count 13 May 24 20:25:08.632: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container kube-multus ready: true, restart count 1 May 24 20:25:08.632: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC 
(1 container statuses recorded) May 24 20:25:08.632: INFO: Container kube-proxy ready: true, restart count 0 May 24 20:25:08.632: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container setsysctls ready: true, restart count 0 May 24 20:25:08.632: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container controller ready: true, restart count 0 May 24 20:25:08.632: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container speaker ready: true, restart count 0 May 24 20:25:08.632: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container contour ready: true, restart count 0 May 24 20:25:08.632: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container contour ready: true, restart count 0 May 24 20:25:08.632: INFO: test-pod from sched-priority-5542 started at 2021-05-24 20:24:55 +0000 UTC (1 container statuses recorded) May 24 20:25:08.632: INFO: Container test-pod ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:788 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:796 STEP: removing the label kubernetes.io/e2e-pts-filter off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 20:25:18.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8016" for this suite. 
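The filtering spec above applies the dedicated topologyKey kubernetes.io/e2e-pts-filter to the two workers and verifies that four pods with MaxSkew=1 end up two per node. The hard DoNotSchedule form is what turns the spread into a scheduling predicate rather than a score. Sketch of such a constraint; the label selector is illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard topology spread: with DoNotSchedule, any placement that would push the
	// skew across kubernetes.io/e2e-pts-filter above MaxSkew=1 is rejected, so
	// four replicas necessarily land 2+2 on the two labelled nodes.
	c := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // illustrative selector
		},
	}
	fmt.Printf("maxSkew=%d on %s (%s)\n", c.MaxSkew, c.TopologyKey, c.WhenUnsatisfiable)
}
```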
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:10.498 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:784
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":11,"skipped":3389,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 20:25:18.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
May 24 20:25:19.014: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 24 20:25:19.023: INFO: Waiting for terminating namespaces to be deleted...
May 24 20:25:19.029: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
May 24 20:25:19.038: INFO: coredns-74ff55c5b-fbrvj from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container coredns ready: true, restart count 0
May 24 20:25:19.038: INFO: coredns-74ff55c5b-glnw8 from kube-system started at 2021-05-24 20:14:46 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container coredns ready: true, restart count 0
May 24 20:25:19.038: INFO: create-loop-devs-d9nvq from kube-system started at 2021-05-24 19:57:34 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container loopdev ready: true, restart count 0
May 24 20:25:19.038: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container kindnet-cni ready: true, restart count 13
May 24 20:25:19.038: INFO: kube-multus-ds-2n6bd from kube-system started at 2021-05-24 19:57:14 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container kube-multus ready: true, restart count 0
May 24 20:25:19.038: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container kube-proxy ready: true, restart count 0
May 24 20:25:19.038: INFO: tune-sysctls-xlbbr from kube-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container setsysctls ready: true, restart count 0
May 24 20:25:19.038: INFO: speaker-9vpld from metallb-system started at 2021-05-24 19:57:08 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container speaker ready: true, restart count 0
May 24 20:25:19.038: INFO: rs-e2e-pts-filter-5vm7r from sched-pred-8016 started at 2021-05-24 20:25:14 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container e2e-pts-filter ready: true, restart count 0
May 24 20:25:19.038: INFO: rs-e2e-pts-filter-xhhdk from sched-pred-8016 started at 2021-05-24 20:25:14 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container e2e-pts-filter ready: true, restart count 0
May 24 20:25:19.038: INFO: rs-e2e-pts-score-4tl9h from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container e2e-pts-score ready: false, restart count 0
May 24 20:25:19.038: INFO: rs-e2e-pts-score-67k6s from sched-priority-5542 started at 2021-05-24 20:24:54 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.038: INFO: Container e2e-pts-score ready: false, restart count 0
May 24 20:25:19.038: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
May 24 20:25:19.047: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container loopdev ready: true, restart count 0
May 24 20:25:19.047: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container kindnet-cni ready: true, restart count 13
May 24 20:25:19.047: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container kube-multus ready: true, restart count 1
May 24 20:25:19.047: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container kube-proxy ready: true, restart count 0
May 24 20:25:19.047: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container setsysctls ready: true, restart count 0
May 24 20:25:19.047: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container controller ready: true, restart count 0
May 24 20:25:19.047: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container speaker ready: true, restart count 0
May 24 20:25:19.047: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container contour ready: true, restart count 0
May 24 20:25:19.047: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container contour ready: true, restart count 0
May 24 20:25:19.047: INFO: rs-e2e-pts-filter-7prmd from sched-pred-8016 started at 2021-05-24 20:25:14 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container e2e-pts-filter ready: true, restart count 0
May 24 20:25:19.047: INFO: rs-e2e-pts-filter-fkb25 from sched-pred-8016 started at 2021-05-24 20:25:14 +0000 UTC (1 container statuses recorded)
May 24 20:25:19.047: INFO: Container e2e-pts-filter ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-86d2336c-f008-461d-aa4e-0994978303d9=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-d7f8f4db-7344-4be7-a54c-b79e256c20fd testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-d7f8f4db-7344-4be7-a54c-b79e256c20fd off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-d7f8f4db-7344-4be7-a54c-b79e256c20fd
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-86d2336c-f008-461d-aa4e-0994978303d9=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 20:25:28.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8450" for this suite.
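
What the steps above exercise: the chosen node gets a NoSchedule taint plus a random label, and only a pod that both tolerates the taint (matching key, value, and effect) and selects the label can be scheduled onto it. A rough sketch of such a pod using the k8s.io/api Go types follows; the "-example" key names stand in for the random UUID-suffixed keys logged above, and the pod name and image are illustrative assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testTaint is a taint of the kind the test applies to the chosen node.
// The key and value follow the pattern logged above; the suffix is illustrative.
var testTaint = corev1.Taint{
	Key:    "kubernetes.io/e2e-taint-key-example",
	Value:  "testing-taint-value",
	Effect: corev1.TaintEffectNoSchedule,
}

// toleratingPod builds a pod that tolerates exactly that taint and is pinned,
// via nodeSelector, to the node carrying the test's random label.
func toleratingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-example": "testing-label-value", // illustrative label key
			},
			Tolerations: []corev1.Toleration{{
				Key:      testTaint.Key,
				Operator: corev1.TolerationOpEqual, // match the taint's key and value exactly
				Value:    testTaint.Value,
				Effect:   testTaint.Effect,
			}},
			Containers: []corev1.Container{{
				Name:  "with-tolerations",
				Image: "k8s.gcr.io/pause:3.2", // placeholder image
			}},
		},
	}
}

func main() {
	p := toleratingPod()
	fmt.Printf("%s tolerates %s=%s:%s\n", p.Name, testTaint.Key, testTaint.Value, testTaint.Effect)
}

With operator Equal the toleration must match the taint's value as well as its key; operator Exists would tolerate any value for that key.
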
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:9.169 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":12,"skipped":4428,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 24 20:25:28.170: INFO: Running AfterSuite actions on all nodes
May 24 20:25:28.170: INFO: Running AfterSuite actions on node 1
May 24 20:25:28.171: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5655,"failed":0}
Ran 12 of 5667 Specs in 516.596 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5655 Skipped
PASS
Ginkgo ran 1 suite in 8m38.31833203s
Test Suite Passed
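
Each spec outcome in this run is also emitted as a single-line JSON record (the {"msg": ..., "total": 12, ...} entries), alongside the JUnit XML noted above. If you need to post-process a saved copy of this output, a small sketch like the following can tally those records; the specRecord struct mirrors the fields visible in the log, and the e2e.log filename is a placeholder assumption.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

// specRecord mirrors the one-line JSON progress records shown in the log above.
type specRecord struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	// "e2e.log" is a placeholder path for a saved copy of this output.
	f, err := os.Open("e2e.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	passed, failed := 0, 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":`) {
			continue // not a progress record
		}
		var r specRecord
		if err := json.Unmarshal([]byte(line), &r); err != nil {
			continue // skip lines that only look like records
		}
		switch {
		case strings.HasPrefix(r.Msg, "PASSED"):
			passed++
		case strings.HasPrefix(r.Msg, "FAILED"):
			failed++
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d passed, %d failed\n", passed, failed)
}
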