I0614 17:55:01.033237 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0614 17:55:01.033369 17 e2e.go:129] Starting e2e run "bd7c651f-92c1-4923-aaeb-a58851f61f13" on Ginkgo node 1 {"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1623693299 - Will randomize all specs Will run 12 of 5668 specs Jun 14 17:55:01.062: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:55:01.066: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 14 17:55:01.096: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 14 17:55:01.150: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 14 17:55:01.151: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jun 14 17:55:01.151: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 14 17:55:01.166: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) Jun 14 17:55:01.166: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jun 14 17:55:01.166: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) Jun 14 17:55:01.166: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jun 14 17:55:01.166: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) Jun 14 17:55:01.166: INFO: e2e test version: v1.20.7 Jun 14 17:55:01.174: INFO: kube-apiserver version: v1.20.7 Jun 14 17:55:01.174: INFO: >>> kubeConfig: /root/.kube/config Jun 14 17:55:01.181: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify 
pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:55:01.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred Jun 14 17:55:01.225: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 14 17:55:01.233: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:55:01.237: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:55:01.246: INFO: Waiting for terminating namespaces to be deleted... Jun 14 17:55:01.250: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:55:01.260: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:55:01.260: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container coredns ready: true, restart count 0 Jun 14 17:55:01.260: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container coredns ready: true, restart count 0 Jun 14 17:55:01.260: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:55:01.260: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container kindnet-cni ready: true, restart count 94 Jun 14 17:55:01.260: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:55:01.260: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:55:01.260: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:55:01.260: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.260: INFO: Container speaker ready: true, restart count 0 Jun 14 17:55:01.260: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:55:01.269: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:55:01.269: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 
container statuses recorded) Jun 14 17:55:01.269: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:55:01.269: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:55:01.269: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:55:01.269: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:55:01.269: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:55:01.269: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:55:01.269: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:55:01.269: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:55:01.269: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container controller ready: true, restart count 0 Jun 14 17:55:01.269: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container speaker ready: true, restart count 0 Jun 14 17:55:01.269: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container contour ready: true, restart count 3 Jun 14 17:55:01.269: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:55:01.269: INFO: Container contour ready: true, restart count 1 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:216 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.16888474f08d2851], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Warning], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.16888474f0f0001d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.16888476f0cbd0e9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5174/filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.168884770e925355], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.216/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.168884772556ad0d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.16888477320531a5], Reason = [Created], Message = [Created container filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a] STEP: Considering event: Type = [Normal], Name = [filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a.168884773a1d68ec], Reason = [Started], Message = [Started container filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a] STEP: Considering event: Type = [Normal], Name = [without-label.1688847470182adf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5174/without-label to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-label.168884748c4bb1e3], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.215/24]] STEP: Considering event: Type = [Normal], Name = [without-label.168884749a41271c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.168884749c668894], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16888474a4af6e4b], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16888474ef405dc4], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16888474fe6ea487], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Considering event: Type = [Warning], Name = [without-label.1688847506e52911], Reason = [FailedCreatePodSandBox], Message = [Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f69b04bdeda2f4b4fd96a5d45bac535df600c79364088e1079eb2b8ad8db172f": Multus: [sched-pred-5174/without-label]: error getting pod: pods "without-label" not found] STEP: Considering event: Type = [Warning], Name = [additional-podd6d83cd1-f802-4c91-8056-b6117177e7e2.16888477bcef0b69], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [additional-podd6d83cd1-f802-4c91-8056-b6117177e7e2.16888477bffd605c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] 
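The FailedScheduling events above show the scheduler rejecting the extra pod with "Insufficient example.com/beardsecond": this spec registers a RuntimeClass whose pod overhead is expressed in that fake extended resource, so the overhead is added on top of the pod's own requests during filtering. Below is a minimal sketch of the two objects involved, written against the k8s.io/api Go types; the handler name and the quantities are illustrative, not the values the e2e test itself uses.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A RuntimeClass whose Overhead is charged to every pod that uses it,
	// on top of the pod's own container requests. The e2e test expresses
	// the overhead in the fake extended resource seen in the events.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-runtimeclass"}, // illustrative name
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				"example.com/beardsecond": resource.MustParse("250"), // illustrative quantity
			},
		},
	}

	// A pod that opts into the RuntimeClass. During filtering the scheduler
	// counts container requests plus the RuntimeClass overhead against the
	// node's allocatable, which is why the events report
	// "Insufficient example.com/beardsecond" once the filler pod has taken
	// most of the resource.
	rcName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						"example.com/beardsecond": resource.MustParse("100"), // illustrative quantity
					},
				},
			}},
		},
	}

	fmt.Printf("%s requests %v plus overhead %v\n",
		pod.Name, pod.Spec.Containers[0].Resources.Requests, rc.Overhead.PodFixed)
}
```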
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:55:16.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5174" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:15.314 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:211 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":1,"skipped":1492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:55:16.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:55:16.744: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:55:16.754: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 17:55:16.758: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:55:16.767: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:55:16.767: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container coredns ready: true, restart count 0 Jun 14 17:55:16.767: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container coredns ready: true, restart count 0 Jun 14 17:55:16.767: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:55:16.767: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container kindnet-cni ready: true, restart count 94 Jun 14 17:55:16.767: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:55:16.767: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:55:16.767: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:55:16.767: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.767: INFO: Container speaker ready: true, restart count 0 Jun 14 17:55:16.767: INFO: filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a from sched-pred-5174 started at 2021-06-14 17:55:12 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.768: INFO: Container filler-pod-c093dfbf-67ed-4cb5-b56b-25d5f5c24a1a ready: true, restart count 0 Jun 14 17:55:16.768: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:55:16.777: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:55:16.777: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:55:16.777: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:55:16.777: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:55:16.777: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:55:16.777: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 
+0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:55:16.777: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:55:16.777: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:55:16.777: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:55:16.777: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container controller ready: true, restart count 0 Jun 14 17:55:16.777: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container speaker ready: true, restart count 0 Jun 14 17:55:16.777: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container contour ready: true, restart count 3 Jun 14 17:55:16.777: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:55:16.777: INFO: Container contour ready: true, restart count 1 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:788 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:796 STEP: removing the label kubernetes.io/e2e-pts-filter off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:55:22.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9307" for this suite. 
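The PodTopologySpread Filtering spec above labels the two workers with the dedicated topology key kubernetes.io/e2e-pts-filter and then checks that 4 replicas with MaxSkew=1 land two per node. The sketch below shows the constraint such a pod carries, using the core/v1 Go types; the pod name and labels are illustrative placeholders, only the topology key comes from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With MaxSkew=1 and WhenUnsatisfiable=DoNotSchedule the constraint is a
	// hard filter: a node is rejected if placing the pod there would make the
	// per-topology-domain counts differ by more than 1, which is what forces
	// the 4 replicas to split 2/2 across the two labelled nodes.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-pts-filter-0",                     // illustrative name
			Labels: map[string]string{"app": "pts-filter"}, // illustrative labels
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter", // the key applied in the log
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-filter"},
				},
			}},
		},
	}

	c := pod.Spec.TopologySpreadConstraints[0]
	fmt.Printf("%s spreads over %q with maxSkew=%d\n", pod.Name, c.TopologyKey, c.MaxSkew)
}
```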
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:6.386 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:784 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":2,"skipped":2179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:55:22.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135 Jun 14 17:55:22.949: INFO: Waiting up to 1m0s for all nodes to be ready Jun 14 17:56:23.000: INFO: Waiting for terminating namespaces to be deleted... Jun 14 17:56:23.004: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 14 17:56:23.018: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 14 17:56:23.018: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:344 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 Jun 14 17:56:27.113: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Node: leguer-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 17:56:27.113: INFO: Node: leguer-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 17:56:27.113: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:56:27.113: INFO: Node: leguer-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 17:56:27.113: INFO: Node: leguer-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 17:56:27.118: INFO: Waiting for running... Jun 14 17:56:32.174: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
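For reference, the ComputeCPUMemFraction lines are plain ratios of requested to allocatable resources, capped at 1, and the balanced filler pods created next are sized so both nodes end up at the same fraction. The snippet below is a back-of-the-envelope reproduction of the numbers logged for leguer-worker, not the e2e framework's own code.

```go
package main

import "fmt"

// fraction reproduces the ratio the log reports as cpuFraction and
// memFraction: total requested on the node divided by the node's
// allocatable, capped at 1.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		return 1
	}
	return f
}

func main() {
	// Values taken from the leguer-worker lines above.
	fmt.Println(fraction(100, 88000))             // cpuFraction 0.0011363636363636363
	fmt.Println(fraction(104857600, 67430219776)) // memFraction 0.001555053510849171

	// With the balanced filler pod added (see the lines that follow),
	// the requested total exceeds allocatable and the fraction is capped at 1.
	fmt.Println(fraction(439100, 88000))
}
```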
Jun 14 17:56:37.243: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Node: leguer-worker, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1 Jun 14 17:56:37.243: INFO: Node: leguer-worker, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 14 17:56:37.243: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.243: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Pod for on the node: e4636419-7084-482b-9b32-4ce15dea95e5-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:56:37.244: INFO: Node: leguer-worker2, totalRequestedCPUResource: 614700, cpuAllocatableMil: 88000, cpuFraction: 1 Jun 14 17:56:37.244: INFO: Node: leguer-worker2, totalRequestedMemResource: 470648389632, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a 
ReplicaSet with 4 replicas on node "leguer-worker" STEP: Verifying if the test-pod lands on node "leguer-worker2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:352 STEP: removing the label kubernetes.io/e2e-pts-score off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:56:49.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7028" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:86.427 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:340 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":3,"skipped":2381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:56:49.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:56:49.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:56:49.402: INFO: Waiting for terminating namespaces to be deleted... Jun 14 17:56:49.405: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:56:49.415: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:56:49.415: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container coredns ready: true, restart count 0 Jun 14 17:56:49.415: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container coredns ready: true, restart count 0 Jun 14 17:56:49.415: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:56:49.415: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container kindnet-cni ready: true, restart count 94 Jun 14 17:56:49.415: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:56:49.415: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:56:49.415: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:56:49.415: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container speaker ready: true, restart count 0 Jun 14 17:56:49.415: INFO: rs-e2e-pts-score-dj76t from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:49.415: INFO: rs-e2e-pts-score-mdvm2 from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:49.415: INFO: rs-e2e-pts-score-pz7pp from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 
17:56:49.415: INFO: rs-e2e-pts-score-tmk6x from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.415: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:49.415: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:56:49.424: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:56:49.424: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:56:49.424: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:56:49.424: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:56:49.424: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:56:49.424: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:56:49.424: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:56:49.424: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:56:49.424: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:56:49.424: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.424: INFO: Container controller ready: true, restart count 0 Jun 14 17:56:49.424: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.425: INFO: Container speaker ready: true, restart count 0 Jun 14 17:56:49.425: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.425: INFO: Container contour ready: true, restart count 3 Jun 14 17:56:49.425: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.425: INFO: Container contour ready: true, restart count 1 Jun 14 17:56:49.425: INFO: test-pod from sched-priority-7028 started at 2021-06-14 17:56:39 +0000 UTC (1 container statuses recorded) Jun 14 17:56:49.425: INFO: Container test-pod ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 STEP: Trying to schedule Pod with nonempty 
NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1688848d9e968c82], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1688848da2e51ca8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:56:50.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9134" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":4,"skipped":3793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:56:50.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:56:50.509: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:56:50.521: INFO: Waiting for terminating namespaces to be deleted... 
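The restricted-pod events just above come from a pod whose required node affinity matches no node label, so the only outcomes reported are the master taint and "didn't match Pod's node affinity". Below is a sketch of such a pod spec using the core/v1 Go types; the label key and values are illustrative placeholders, not the ones the test uses.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// requiredDuringSchedulingIgnoredDuringExecution is a hard requirement:
	// if no node carries a matching label the pod stays Pending, producing the
	// "node(s) didn't match Pod's node affinity" FailedScheduling events above.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{{
							MatchExpressions: []corev1.NodeSelectorRequirement{{
								Key:      "example.com/unmatched-label", // illustrative key
								Operator: corev1.NodeSelectorOpIn,
								Values:   []string{"no-node-has-this"}, // illustrative value
							}},
						}},
					},
				},
			},
		},
	}

	req := pod.Spec.Affinity.NodeAffinity.
		RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0]
	fmt.Printf("%s requires node label %s in %v\n", pod.Name, req.Key, req.Values)
}
```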
Jun 14 17:56:50.525: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:56:50.534: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:56:50.534: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container coredns ready: true, restart count 0 Jun 14 17:56:50.534: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container coredns ready: true, restart count 0 Jun 14 17:56:50.534: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:56:50.534: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container kindnet-cni ready: true, restart count 94 Jun 14 17:56:50.534: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:56:50.534: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:56:50.534: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:56:50.534: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container speaker ready: true, restart count 0 Jun 14 17:56:50.534: INFO: rs-e2e-pts-score-dj76t from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:50.534: INFO: rs-e2e-pts-score-mdvm2 from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:50.534: INFO: rs-e2e-pts-score-pz7pp from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:50.534: INFO: rs-e2e-pts-score-tmk6x from sched-priority-7028 started at 2021-06-14 17:56:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.534: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 14 17:56:50.534: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:56:50.543: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:56:50.543: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:56:50.543: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC 
(1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:56:50.543: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:56:50.543: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:56:50.543: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:56:50.543: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:56:50.543: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:56:50.543: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:56:50.543: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container controller ready: true, restart count 0 Jun 14 17:56:50.543: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container speaker ready: true, restart count 0 Jun 14 17:56:50.543: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container contour ready: true, restart count 3 Jun 14 17:56:50.543: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container contour ready: true, restart count 1 Jun 14 17:56:50.543: INFO: test-pod from sched-priority-7028 started at 2021-06-14 17:56:39 +0000 UTC (1 container statuses recorded) Jun 14 17:56:50.543: INFO: Container test-pod ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122 Jun 14 17:56:56.666: INFO: Pod chaos-controller-manager-69c479c674-ld4jc requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.666: INFO: Pod chaos-daemon-2tzpz requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.666: INFO: Pod chaos-daemon-dq48n requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.666: INFO: Pod dockerd requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.666: INFO: Pod coredns-74ff55c5b-99kjc requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.666: INFO: Pod coredns-74ff55c5b-vc9hb requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.666: INFO: Pod create-loop-devs-nbf25 requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.666: INFO: 
Pod create-loop-devs-pvvjt requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod kindnet-kx9mk requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod kindnet-svp2q requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod kube-multus-ds-5gpqm requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod kube-multus-ds-n48bs requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod kube-proxy-7g274 requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod kube-proxy-mp68m requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod tune-sysctls-jldpw requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod tune-sysctls-vjdll requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod chaos-operator-ce-5754fd4b69-zcrd4 requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod controller-675995489c-h2wms requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod speaker-55zcr requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod speaker-7vjtj requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod contour-6648989f79-2vldk requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod contour-6648989f79-8gz4z requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Pod rs-e2e-pts-score-dj76t requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod rs-e2e-pts-score-mdvm2 requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod rs-e2e-pts-score-pz7pp requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod rs-e2e-pts-score-tmk6x requesting local ephemeral resource =0 on Node leguer-worker Jun 14 17:56:56.667: INFO: Pod test-pod requesting local ephemeral resource =0 on Node leguer-worker2 Jun 14 17:56:56.667: INFO: Using pod capacity: 47063248896 Jun 14 17:56:56.667: INFO: Node: leguer-worker has local ephemeral resource allocatable: 470632488960 Jun 14 17:56:56.667: INFO: Node: leguer-worker2 has local ephemeral resource allocatable: 470632488960 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Jun 14 17:56:56.753: INFO: Waiting for running... 
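The saturation arithmetic above: each worker reports 470632488960 bytes of allocatable local ephemeral storage, the test picks a per-pod capacity of 47063248896 bytes (one tenth of a node), and 20 such pods exactly fill the two workers, so the extra pod must stay Pending. Below is a sketch of one overcommit-style pod carrying that request and limit, using the core/v1 Go types; only the quantity is taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One of the 20 saturating pods: it requests exactly the per-pod capacity
	// the test computed (47063248896 bytes), so 20 of them consume both
	// workers' allocatable ephemeral storage and a 21st identical pod cannot fit.
	perPod := resource.MustParse("47063248896")
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-0"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: perPod},
					Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: perPod},
				},
			}},
		},
	}

	q := pod.Spec.Containers[0].Resources.Requests[corev1.ResourceEphemeralStorage]
	fmt.Printf("%s requests ephemeral-storage=%s\n", pod.Name, q.String())
}
```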
STEP: Considering event: Type = [Normal], Name = [overcommit-0.1688848f4d4bd648], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-0 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-0.1688848fa0a6dfd9], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.227/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.1688848fe2202425], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.1688848fe79e3816], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.1688848fff5fb31b], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.1688848f4d7b5ad3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-1 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-1.1688848fa0ea14f3], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.230/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.1688848fe21c496b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.1688848fe748bf5d], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.1688849000a5151e], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1688848f4ff6fb9f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-10 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1688848fa0e36148], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.101/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1688848fe38afeb0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1688848fe81c43b0], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16888490002a06a2], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1688848f500101f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-11 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1688848fa0e22e15], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.234/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1688848fe28c443c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1688848fe75b5022], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16888490005563ec], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1688848f503a0a46], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-12 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1688848fa1090a76], Reason = [AddedInterface], Message = [Add eth0 
[10.244.2.102/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1688848fe3d5e564], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1688848fe92d95d0], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16888490006b0eb3], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1688848f509baba4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-13 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1688848fa0e3efb2], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.100/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1688848fe3d2fe88], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1688848fe905cd30], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16888490000cf263], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1688848f50ee4900], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-14 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1688848fa0e360ed], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.236/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1688848fe1f8ecaf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1688848fe77fb7bc], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1688848fff60286c], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.1688848f510ac183], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-15 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-15.1688848fa106c078], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.235/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.1688848fe3b5a482], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.1688848fe8b9011c], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16888490002de60e], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.1688848f513c78bb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-16 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-16.1688848fa1062ee1], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.232/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.1688848fe3b2cb6a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.1688848fe89a65e8], Reason = [Created], Message = 
[Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16888490004ce96f], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1688848f517867d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-17 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1688848fa10bb0cf], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.103/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1688848fe3acd29f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1688848fe8b2b959], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168884900029ddeb], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1688848f51b368b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-18 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1688848fa0e1d99e], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.104/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1688848fe20c41d8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1688848fe7b5e82f], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16888490006ab444], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1688848f51e8e54b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-19 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1688848fa21824a1], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.105/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1688848fe3df2cfe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1688848fe91215f8], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16888490006d88a4], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1688848f4de44db4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-2 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1688848fa0e2a686], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.231/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1688848fe3af6e30], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1688848fe81a1bcd], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1688848ffff3153e], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1688848f4e228f0c], Reason = [Scheduled], Message = [Successfully assigned 
sched-pred-2562/overcommit-3 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1688848f7d7f150d], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.96/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1688848f98839c6d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1688848fc9a6c083], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1688848fe2853509], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1688848f4e65e65d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-4 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1688848fa0e9def9], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.97/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1688848fe3822230], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1688848fe831207d], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168884900042f30a], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.1688848f4eb6b54b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-5 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-5.1688848fa0e48acd], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.233/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.1688848fe3a876d2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.1688848fe89593ea], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16888490004c7e9a], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.1688848f4edf9a25], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-6 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-6.1688848fa0a0ffaf], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.228/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.1688848fe3ae2fb7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.1688848fe8c45f61], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.1688848fffc8b4a5], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1688848f4f1e16d9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-7 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1688848fa0e54014], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.99/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1688848fe3d2e098], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" 
already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1688848fe90153a8], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1688849000557fc6], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.1688848f4f537168], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-8 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-8.1688848fa0a0445a], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.229/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.1688848fe388c963], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.1688848fe7c88405], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16888490002298b7], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.1688848f4f9831cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2562/overcommit-9 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.1688848fa0e39d6d], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.98/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.1688848fe3ae7c74], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.1688848fe8997ee4], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16888490005f48b8], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16888491a9c4a0fa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16888491aa4695be], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:57:07.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2562" for this suite. 
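For reference, a minimal sketch of a pod that requests local ephemeral storage the way the saturating and "additional" pods above do; the container name is illustrative and the request value is copied from the logged per-pod capacity. Once both workers are full, any such request is rejected with the "Insufficient ephemeral-storage" message shown in the warning events.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"}, // name taken from the events above
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						// any request larger than the remaining per-node allocatable fails to schedule
						v1.ResourceEphemeralStorage: resource.MustParse("47063248896"),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```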
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:17.367 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":5,"skipped":4201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:57:07.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 17:57:07.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 17:57:07.892: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 17:57:07.895: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jun 14 17:57:07.907: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:57:07.907: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container coredns ready: true, restart count 0 Jun 14 17:57:07.907: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container coredns ready: true, restart count 0 Jun 14 17:57:07.907: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:57:07.907: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container kindnet-cni ready: true, restart count 94 Jun 14 17:57:07.907: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container kube-multus ready: true, restart count 0 Jun 14 17:57:07.907: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:57:07.907: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:57:07.907: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container speaker ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-0 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-0 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-1 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-1 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-11 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-11 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-14 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-14 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-15 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-15 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-16 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-16 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-2 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-2 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-5 from 
sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-5 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-6 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-6 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: overcommit-8 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.907: INFO: Container overcommit-8 ready: true, restart count 0 Jun 14 17:57:07.907: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jun 14 17:57:07.918: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container chaos-mesh ready: true, restart count 0 Jun 14 17:57:07.918: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container chaos-daemon ready: true, restart count 0 Jun 14 17:57:07.918: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container dockerd ready: true, restart count 0 Jun 14 17:57:07.918: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container loopdev ready: true, restart count 0 Jun 14 17:57:07.918: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container kindnet-cni ready: true, restart count 149 Jun 14 17:57:07.918: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container kube-multus ready: true, restart count 1 Jun 14 17:57:07.918: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container kube-proxy ready: true, restart count 0 Jun 14 17:57:07.918: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container setsysctls ready: true, restart count 0 Jun 14 17:57:07.918: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container chaos-operator ready: true, restart count 0 Jun 14 17:57:07.918: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container controller ready: true, restart count 0 Jun 14 17:57:07.918: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container speaker ready: true, restart count 0 Jun 14 17:57:07.918: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container contour ready: true, restart count 3 Jun 14 17:57:07.918: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container contour ready: true, restart count 1 Jun 14 17:57:07.918: INFO: overcommit-10 from 
sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-10 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-12 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-12 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-13 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-13 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-17 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-17 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-18 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-18 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-19 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-19 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-3 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-3 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-4 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-4 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-7 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-7 ready: true, restart count 0 Jun 14 17:57:07.918: INFO: overcommit-9 from sched-pred-2562 started at 2021-06-14 17:56:56 +0000 UTC (1 container statuses recorded) Jun 14 17:57:07.918: INFO: Container overcommit-9 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-d7a3d33e-0065-443d-a8f7-2a4e51b32d65 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16888491f1818d22], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3180/without-toleration to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.168884920f85babf], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.237/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.168884921c1f9dc2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849222653ea4], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849229e98f2e], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849269d6df67], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168884926c3a7888], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168884926c9eeb4d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168884926c3a7888], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168884926c9eeb4d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16888491f1818d22], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3180/without-toleration to leguer-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.168884920f85babf], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.237/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.168884921c1f9dc2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849222653ea4], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849229e98f2e], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.1688849269d6df67], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.168884930ae34b43], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3180/still-no-tolerations to leguer-worker] STEP: removing the label kubernetes.io/e2e-label-key-d7a3d33e-0065-443d-a8f7-2a4e51b32d65 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-d7a3d33e-0065-443d-a8f7-2a4e51b32d65 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:57:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3180" for this suite. 
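A rough sketch of the objects this spec exercises: the NoSchedule taint that was applied (key and value from the log) and a pod that targets the tainted node but carries no toleration. The pod below pins itself with a plain nodeSelector for brevity; the e2e test itself uses node affinity, which is why the events above mention node affinity. Adding a toleration with the same key, value, and effect would make it schedulable again.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The NoSchedule taint the test places on the chosen node (key/value as logged).
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-d752b8c9-7527-48ac-bd78-c80e3c78cb97",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// A pod pinned to the tainted node via the random test label, but with no
	// matching toleration -- this is why "still-no-tolerations" stays Pending.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "still-no-tolerations"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-d7a3d33e-0065-443d-a8f7-2a4e51b32d65": "testing-label-value",
			},
			Containers:  []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Tolerations: nil, // a toleration matching the taint above would let it schedule
		},
	}

	t, _ := json.Marshal(taint)
	p, _ := json.Marshal(pod)
	fmt.Println(string(t))
	fmt.Println(string(p))
}
```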
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:5.302 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":6,"skipped":4428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:57:13.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 14 17:57:13.187: INFO: Waiting up to 1m0s for all nodes to be ready Jun 14 17:58:13.240: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node leguer-worker. STEP: Apply 10 fake resource to node leguer-worker2. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
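A hedged sketch of what the "medium" pod in this spec looks like: a topology spread constraint on the dedicated kubernetes.io/e2e-pts-preemption key (from the log) plus a priority class that outranks the "low" pods, so satisfying the constraint forces one low-priority pod to be preempted. The labels, label selector, and priority class name below are illustrative, not the test's exact values.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	medium := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "foo"}, // hypothetical label
		},
		Spec: v1.PodSpec{
			PriorityClassName: "medium-priority", // hypothetical; must outrank the "low" pods
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // dedicated key from the log
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "foo"},
				},
			}},
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(medium, "", "  ")
	fmt.Println(string(out))
}
```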
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 17:58:39.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5835" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:86.391 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":7,"skipped":4535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 17:58:39.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135 Jun 14 17:58:39.586: INFO: Waiting up to 1m0s for all nodes to be ready Jun 14 17:59:39.639: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 17:59:39.643: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 14 17:59:39.657: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 14 17:59:39.657: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238 Jun 14 17:59:39.674: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Node: leguer-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 17:59:39.674: INFO: Node: leguer-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 17:59:39.674: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400 Jun 14 17:59:39.674: INFO: Node: leguer-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 17:59:39.674: INFO: Node: leguer-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 17:59:39.684: INFO: Waiting for running... Jun 14 17:59:44.741: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
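The fractions printed by ComputeCPUMemFraction are simply requested divided by allocatable for each node. A plain-Go sketch reproducing the leguer-worker values logged above; the balancing pods created next are sized so that both nodes end up at the same fraction, capped at 1 as the later log lines show.

```go
package main

import "fmt"

func main() {
	// Numbers taken from the log lines above for leguer-worker before balancing.
	const (
		requestedCPUMilli   = 100.0
		allocatableCPUMilli = 88000.0
		requestedMemBytes   = 104857600.0
		allocatableMemBytes = 67430219776.0
	)
	fmt.Println("cpuFraction:", requestedCPUMilli/allocatableCPUMilli) // ~0.0011363636, as logged
	fmt.Println("memFraction:", requestedMemBytes/allocatableMemBytes) // ~0.0015550535, as logged
}
```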
Jun 14 17:59:49.808: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Node: leguer-worker, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1 Jun 14 17:59:49.808: INFO: Node: leguer-worker, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 14 17:59:49.808: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Pod for on the node: d07cbd47-7b59-44b3-98bf-cea2f2454f1e-0, Cpu: 43900, Mem: 33610252288 Jun 14 17:59:49.808: INFO: Node: leguer-worker2, totalRequestedCPUResource: 614700, cpuAllocatableMil: 88000, cpuFraction: 1 Jun 14 17:59:49.808: INFO: Node: leguer-worker2, totalRequestedMemResource: 470648389632, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a 
RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7886 to 1 STEP: Verify the pods should not scheduled to the node: leguer-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7886, will wait for the garbage collector to delete the pods Jun 14 17:59:56.061: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.366107ms Jun 14 17:59:56.861: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 800.269803ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:00:00.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7886" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:80.937 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:238 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":8,"skipped":5169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:00:00.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135 Jun 14 18:00:00.523: INFO: Waiting up to 1m0s for all nodes to be ready Jun 14 18:01:00.573: INFO: Waiting for terminating namespaces to be deleted... Jun 14 18:01:00.577: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 14 18:01:00.593: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 14 18:01:00.593: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 STEP: Trying to launch a pod with a label to get a node which can launch it. 
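For the avoidPod spec above: the preference the scheduler honors comes from the alpha preferAvoidPods node annotation applied to the first node. A sketch of that annotation is below; the key matches core/v1's PreferAvoidPodsAnnotationKey, while the JSON payload (an entry naming the scheduler-priority-avoid-pod ReplicationController as the controller to avoid) is illustrative rather than the test's exact serialization.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative node annotation; the payload shape is an assumption, not copied
	// from the e2e test source.
	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name: "leguer-worker",
			Annotations: map[string]string{
				v1.PreferAvoidPodsAnnotationKey: `{"preferAvoidPods":[{"podSignature":{"podController":{"apiVersion":"v1","kind":"ReplicationController","name":"scheduler-priority-avoid-pod","controller":true}},"reason":"some reason","message":"some message"}]}`,
			},
		},
	}
	out, _ := json.MarshalIndent(node.Annotations, "", "  ")
	fmt.Println(string(out))
}
```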
STEP: Verifying the node has a label kubernetes.io/hostname Jun 14 18:01:02.638: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Node: leguer-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 18:01:02.638: INFO: Node: leguer-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 18:01:02.638: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:02.638: INFO: Node: leguer-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 18:01:02.638: INFO: Node: leguer-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Jun 14 18:01:02.644: INFO: Waiting for running... Jun 14 18:01:07.700: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 14 18:01:12.768: INFO: ComputeCPUMemFraction for node: leguer-worker Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Node: leguer-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 18:01:12.769: INFO: Node: leguer-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 14 18:01:12.769: INFO: ComputeCPUMemFraction for node: leguer-worker2 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 14 18:01:12.769: INFO: Node: leguer-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Jun 14 18:01:12.769: INFO: Node: leguer-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. 
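A sketch of the second pod in this spec: required pod anti-affinity, keyed on kubernetes.io/hostname, against pods carrying a security=s1 label (the label is inferred from the pod-with-label-security-s1 name in the log), which is what pushes it onto the worker that does not match the term.

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
						// label inferred from the pod name in the log; assumption
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"security": "s1"},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```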
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jun 14 18:01:18.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2486" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132 • [SLOW TEST:78.332 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":9,"skipped":5207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jun 14 18:01:18.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jun 14 18:01:18.858: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 14 18:01:18.867: INFO: Waiting for terminating namespaces to be deleted... 
Jun 14 18:01:18.871: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
Jun 14 18:01:18.880: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.880: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 18:01:18.880: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container coredns ready: true, restart count 0
Jun 14 18:01:18.881: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container coredns ready: true, restart count 0
Jun 14 18:01:18.881: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container loopdev ready: true, restart count 0
Jun 14 18:01:18.881: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container kindnet-cni ready: true, restart count 94
Jun 14 18:01:18.881: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container kube-multus ready: true, restart count 0
Jun 14 18:01:18.881: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 18:01:18.881: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container setsysctls ready: true, restart count 0
Jun 14 18:01:18.881: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container speaker ready: true, restart count 0
Jun 14 18:01:18.881: INFO: pod-with-label-security-s1 from sched-priority-2486 started at 2021-06-14 18:01:00 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.881: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
Jun 14 18:01:18.881: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
Jun 14 18:01:18.889: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container chaos-mesh ready: true, restart count 0
Jun 14 18:01:18.889: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 18:01:18.889: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container dockerd ready: true, restart count 0
Jun 14 18:01:18.889: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container loopdev ready: true, restart count 0
Jun 14 18:01:18.889: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container kindnet-cni ready: true, restart count 149
Jun 14 18:01:18.889: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container kube-multus ready: true, restart count 1
Jun 14 18:01:18.889: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 18:01:18.889: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container setsysctls ready: true, restart count 0
Jun 14 18:01:18.889: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.889: INFO: Container chaos-operator ready: true, restart count 0
Jun 14 18:01:18.890: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.890: INFO: Container controller ready: true, restart count 0
Jun 14 18:01:18.890: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.890: INFO: Container speaker ready: true, restart count 0
Jun 14 18:01:18.890: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.890: INFO: Container contour ready: true, restart count 3
Jun 14 18:01:18.890: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.890: INFO: Container contour ready: true, restart count 1
Jun 14 18:01:18.890: INFO: pod-with-pod-antiaffinity from sched-priority-2486 started at 2021-06-14 18:01:12 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:18.890: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
[It] validates that required NodeAffinity setting is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7c6bf2aa-ee03-470b-96a0-6733cc353025 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7c6bf2aa-ee03-470b-96a0-6733cc353025 off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7c6bf2aa-ee03-470b-96a0-6733cc353025
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:01:22.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4608" for this suite.
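------------------------------
The STEP sequence above labels a node and relaunches the pod constrained to that label. A minimal sketch in Go of such a pod spec, assuming the k8s.io/api/core/v1 types: the label key and value ("42") are copied from the log, the pod name matches the "with-labels" pod that shows up in the next spec's pod listing, and the image is an illustrative placeholder; this is not the e2e framework's own helper.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithRequiredNodeAffinity builds a pod that can only schedule onto a node
// carrying the label applied in the test above (key copied from the log, value "42").
func podWithRequiredNodeAffinity() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2", // illustrative image, not taken from the log
			}},
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      "kubernetes.io/e2e-7c6bf2aa-ee03-470b-96a0-6733cc353025",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"42"},
							}},
						}},
					},
				},
			},
		},
	}
}

func main() {
	pod := podWithRequiredNodeAffinity()
	fmt.Println("pod", pod.Name, "requires label",
		pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions[0].Key)
}

With a required term like this, the scheduler filters out every node that does not carry the label, which is what the "now with labels" relaunch relies on to land back on leguer-worker.
------------------------------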
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":10,"skipped":5333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:01:22.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Jun 14 18:01:23.012: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 14 18:01:23.021: INFO: Waiting for terminating namespaces to be deleted...
Jun 14 18:01:23.025: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
Jun 14 18:01:23.034: INFO: chaos-daemon-dq48n from default started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 18:01:23.034: INFO: coredns-74ff55c5b-99kjc from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container coredns ready: true, restart count 0
Jun 14 18:01:23.034: INFO: coredns-74ff55c5b-vc9hb from kube-system started at 2021-06-14 17:53:11 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container coredns ready: true, restart count 0
Jun 14 18:01:23.034: INFO: create-loop-devs-pvvjt from kube-system started at 2021-06-14 17:39:27 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container loopdev ready: true, restart count 0
Jun 14 18:01:23.034: INFO: kindnet-svp2q from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container kindnet-cni ready: true, restart count 94
Jun 14 18:01:23.034: INFO: kube-multus-ds-5gpqm from kube-system started at 2021-06-14 17:39:01 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container kube-multus ready: true, restart count 0
Jun 14 18:01:23.034: INFO: kube-proxy-7g274 from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 18:01:23.034: INFO: tune-sysctls-jldpw from kube-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container setsysctls ready: true, restart count 0
Jun 14 18:01:23.034: INFO: speaker-7vjtj from metallb-system started at 2021-06-14 17:38:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container speaker ready: true, restart count 0
Jun 14 18:01:23.034: INFO: with-labels from sched-pred-4608 started at 2021-06-14 18:01:20 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container with-labels ready: true, restart count 0
Jun 14 18:01:23.034: INFO: pod-with-label-security-s1 from sched-priority-2486 started at 2021-06-14 18:01:00 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.034: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
Jun 14 18:01:23.034: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
Jun 14 18:01:23.042: INFO: chaos-controller-manager-69c479c674-ld4jc from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container chaos-mesh ready: true, restart count 0
Jun 14 18:01:23.042: INFO: chaos-daemon-2tzpz from default started at 2021-05-26 09:15:28 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container chaos-daemon ready: true, restart count 0
Jun 14 18:01:23.042: INFO: dockerd from default started at 2021-05-26 09:12:20 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container dockerd ready: true, restart count 0
Jun 14 18:01:23.042: INFO: create-loop-devs-nbf25 from kube-system started at 2021-05-22 08:23:43 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container loopdev ready: true, restart count 0
Jun 14 18:01:23.042: INFO: kindnet-kx9mk from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container kindnet-cni ready: true, restart count 149
Jun 14 18:01:23.042: INFO: kube-multus-ds-n48bs from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container kube-multus ready: true, restart count 1
Jun 14 18:01:23.042: INFO: kube-proxy-mp68m from kube-system started at 2021-05-22 08:23:37 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container kube-proxy ready: true, restart count 0
Jun 14 18:01:23.042: INFO: tune-sysctls-vjdll from kube-system started at 2021-05-22 08:23:44 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container setsysctls ready: true, restart count 0
Jun 14 18:01:23.042: INFO: chaos-operator-ce-5754fd4b69-zcrd4 from litmus started at 2021-05-26 09:12:47 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container chaos-operator ready: true, restart count 0
Jun 14 18:01:23.042: INFO: controller-675995489c-h2wms from metallb-system started at 2021-05-22 08:23:59 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container controller ready: true, restart count 0
Jun 14 18:01:23.042: INFO: speaker-55zcr from metallb-system started at 2021-05-22 08:23:57 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container speaker ready: true, restart count 0
Jun 14 18:01:23.042: INFO: contour-6648989f79-2vldk from projectcontour started at 2021-05-22 08:24:02 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container contour ready: true, restart count 3
Jun 14 18:01:23.042: INFO: contour-6648989f79-8gz4z from projectcontour started at 2021-05-22 10:05:00 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container contour ready: true, restart count 1
Jun 14 18:01:23.042: INFO: pod-with-pod-antiaffinity from sched-priority-2486 started at 2021-06-14 18:01:12 +0000 UTC (1 container statuses recorded)
Jun 14 18:01:23.042: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f501e9a6-4cb4-4ef7-b174-69278a846e3d=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-2b014a0e-6175-4f9f-967e-a32b1072114a testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-2b014a0e-6175-4f9f-967e-a32b1072114a off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-2b014a0e-6175-4f9f-967e-a32b1072114a
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f501e9a6-4cb4-4ef7-b174-69278a846e3d=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:01:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6307" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":11,"skipped":5595,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun 14 18:01:27.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:135
Jun 14 18:01:27.193: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 14 18:02:27.242: INFO: Waiting for terminating namespaces to be deleted...
Jun 14 18:02:27.246: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 14 18:02:27.260: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 14 18:02:27.260: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
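------------------------------
The taints-tolerations predicate that just passed, and the priority spec that starts below, both hinge on a pod toleration exactly matching a node taint. A minimal sketch of that pairing in Go, using the taint key and value recorded above and a hand-rolled match check that restates the Equal-operator rule for illustration; it is not the e2e framework's helper or the scheduler's own code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// tolerates restates the Equal-operator matching rule: key, value and effect
// must line up (an empty toleration effect would match any effect).
func tolerates(tol v1.Toleration, taint v1.Taint) bool {
	return tol.Key == taint.Key &&
		tol.Value == taint.Value &&
		(tol.Effect == "" || tol.Effect == taint.Effect)
}

func main() {
	// Taint applied to the node in the log above.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-f501e9a6-4cb4-4ef7-b174-69278a846e3d",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}
	// Toleration the relaunched pod would need in order to schedule anyway.
	tol := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}
	fmt.Println("toleration matches taint:", tolerates(tol, taint)) // true
}
------------------------------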
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302
Jun 14 18:02:27.275: INFO: ComputeCPUMemFraction for node: leguer-worker
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Node: leguer-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Jun 14 18:02:27.275: INFO: Node: leguer-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Jun 14 18:02:27.275: INFO: ComputeCPUMemFraction for node: leguer-worker2
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Pod for on the node: envoy-nwdcq, Cpu: 200, Mem: 419430400
Jun 14 18:02:27.275: INFO: Node: leguer-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Jun 14 18:02:27.275: INFO: Node: leguer-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Jun 14 18:02:27.286: INFO: Waiting for running...
Jun 14 18:02:32.345: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
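------------------------------
For reference, the cpuFraction and memFraction values logged here are simply requested-over-allocatable, clamped to 1. A small Go sketch reproducing the arithmetic from the leguer-worker lines (this is a restatement for illustration, not the framework's own ComputeCPUMemFraction):

package main

import "fmt"

// fraction divides requested resources by allocatable capacity and never
// reports more than a fully requested node, mirroring the clamped values
// seen in the log once the "balanced" pods have been created.
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	fmt.Println(fraction(100, 88000))             // cpuFraction ≈ 0.0011363636 (before balancing)
	fmt.Println(fraction(104857600, 67430219776)) // memFraction ≈ 0.0015550535 (before balancing)
	fmt.Println(fraction(439100, 88000))          // after balanced pods on leguer-worker: 1
}
------------------------------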
Jun 14 18:02:37.413: INFO: ComputeCPUMemFraction for node: leguer-worker
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Node: leguer-worker, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1
Jun 14 18:02:37.413: INFO: Node: leguer-worker, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 14 18:02:37.413: INFO: ComputeCPUMemFraction for node: leguer-worker2
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Pod for on the node: af2c51a8-9b19-4460-b809-3a04cd3b55e8-0, Cpu: 43900, Mem: 33610252288
Jun 14 18:02:37.413: INFO: Node: leguer-worker2, totalRequestedCPUResource: 614700, cpuAllocatableMil: 88000, cpuFraction: 1
Jun 14 18:02:37.413: INFO: Node: leguer-worker2, totalRequestedMemResource: 470648389632, memAllocatableVal: 67430219776, memFraction: 1
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-81853233-6bb7-4467-bb81-44642064cd8e=testing-taint-value-b563e9da-0bb2-47f6-ade0-418f87ae4580:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-aeb4d650-3f2f-4dcd-8588-bf9a62d648f2=testing-taint-value-26bddda4-e467-4aa6-a507-003d6350bedf:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1ebc50df-ed1b-45e9-a9f8-3bc288172c02=testing-taint-value-54dfc1f6-c339-4203-927e-2589f9e3c844:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2892cd86-a22d-4c1e-9417-619b96502475=testing-taint-value-f12016ee-753c-4601-9e66-2bd472e872d9:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b73b1b17-4314-4124-b449-d6b12e003277=testing-taint-value-01ea7e95-9208-47a6-8404-cbc4e66ec238:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e93eff4b-7cdc-4b1b-b5c5-0ce1a8f7e5f2=testing-taint-value-150236a0-c69c-4502-94e2-3ec3994b2284:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2e7ad7bb-0680-41dd-9773-4ffb2a490ee6=testing-taint-value-ff4ef46c-67d7-46ec-b2cd-10a11fdb09c7:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-66601743-f23c-4d23-b11d-cb9f289bd0f5=testing-taint-value-869e5a4d-3332-43d3-ba29-c624c771cc8f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-a2b96e28-cf01-40c4-9264-0f3e4e6dc483=testing-taint-value-1ac55e32-5af4-45cf-b597-14255bb7dea6:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-afbd9e2e-2efd-4493-9190-7123797cd71e=testing-taint-value-f081403c-10e7-4295-88c5-622431405f0b:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-40215234-cbf0-442a-847a-0109a8867c86=testing-taint-value-857677e3-f928-43bf-ba73-bf1adb5696b4:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9f36a5f6-db88-411a-b3d4-2f3aa9b50ef0=testing-taint-value-92dd2885-49ab-4136-972a-5f76e4830247:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-66833784-588d-4b5b-af2f-cb2a6dd47830=testing-taint-value-1ada66ba-e629-4342-b2ee-f3c2314650f9:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b7c082c5-9700-45a2-856f-34d870acb083=testing-taint-value-6cd39bb8-3d59-4e24-b418-cc2a33a05f63:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b8f52c12-e1fc-4ddd-8641-a18fad17f46f=testing-taint-value-cdcd4eb4-4072-4154-aa47-2b4a57c98106:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-15c14d2b-7217-42fa-99d4-f394882e5553=testing-taint-value-af4285ca-2c7c-440c-8f73-61bc9759865f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-12314413-678d-4f1a-bf3c-44cd7e1f30a2=testing-taint-value-a605237b-0349-4213-80ea-d91742bcfdde:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-445de236-113c-43ad-bc0f-d3ef59a07440=testing-taint-value-c5f7d6fd-73e9-49a5-84a6-1f05e89e7b06:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c8081bc7-393a-4c9f-9d53-9366d8acf417=testing-taint-value-4d359f58-7dbe-412f-baad-389eaf827532:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-ab986f83-f655-42a2-9d78-2aaa9b65e796=testing-taint-value-ebce3c7f-9aec-4a43-b7cb-0513fac91269:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ab986f83-f655-42a2-9d78-2aaa9b65e796=testing-taint-value-ebce3c7f-9aec-4a43-b7cb-0513fac91269:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c8081bc7-393a-4c9f-9d53-9366d8acf417=testing-taint-value-4d359f58-7dbe-412f-baad-389eaf827532:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-445de236-113c-43ad-bc0f-d3ef59a07440=testing-taint-value-c5f7d6fd-73e9-49a5-84a6-1f05e89e7b06:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-12314413-678d-4f1a-bf3c-44cd7e1f30a2=testing-taint-value-a605237b-0349-4213-80ea-d91742bcfdde:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-15c14d2b-7217-42fa-99d4-f394882e5553=testing-taint-value-af4285ca-2c7c-440c-8f73-61bc9759865f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b8f52c12-e1fc-4ddd-8641-a18fad17f46f=testing-taint-value-cdcd4eb4-4072-4154-aa47-2b4a57c98106:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b7c082c5-9700-45a2-856f-34d870acb083=testing-taint-value-6cd39bb8-3d59-4e24-b418-cc2a33a05f63:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-66833784-588d-4b5b-af2f-cb2a6dd47830=testing-taint-value-1ada66ba-e629-4342-b2ee-f3c2314650f9:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9f36a5f6-db88-411a-b3d4-2f3aa9b50ef0=testing-taint-value-92dd2885-49ab-4136-972a-5f76e4830247:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-40215234-cbf0-442a-847a-0109a8867c86=testing-taint-value-857677e3-f928-43bf-ba73-bf1adb5696b4:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-afbd9e2e-2efd-4493-9190-7123797cd71e=testing-taint-value-f081403c-10e7-4295-88c5-622431405f0b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-a2b96e28-cf01-40c4-9264-0f3e4e6dc483=testing-taint-value-1ac55e32-5af4-45cf-b597-14255bb7dea6:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-66601743-f23c-4d23-b11d-cb9f289bd0f5=testing-taint-value-869e5a4d-3332-43d3-ba29-c624c771cc8f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2e7ad7bb-0680-41dd-9773-4ffb2a490ee6=testing-taint-value-ff4ef46c-67d7-46ec-b2cd-10a11fdb09c7:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e93eff4b-7cdc-4b1b-b5c5-0ce1a8f7e5f2=testing-taint-value-150236a0-c69c-4502-94e2-3ec3994b2284:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b73b1b17-4314-4124-b449-d6b12e003277=testing-taint-value-01ea7e95-9208-47a6-8404-cbc4e66ec238:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2892cd86-a22d-4c1e-9417-619b96502475=testing-taint-value-f12016ee-753c-4601-9e66-2bd472e872d9:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1ebc50df-ed1b-45e9-a9f8-3bc288172c02=testing-taint-value-54dfc1f6-c339-4203-927e-2589f9e3c844:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-aeb4d650-3f2f-4dcd-8588-bf9a62d648f2=testing-taint-value-26bddda4-e467-4aa6-a507-003d6350bedf:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-81853233-6bb7-4467-bb81-44642064cd8e=testing-taint-value-b563e9da-0bb2-47f6-ade0-418f87ae4580:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun 14 18:02:49.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-2478" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:132
• [SLOW TEST:82.102 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:302
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":12,"skipped":5631,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
Jun 14 18:02:49.265: INFO: Running AfterSuite actions on all nodes
Jun 14 18:02:49.265: INFO: Running AfterSuite actions on node 1
Jun 14 18:02:49.265: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5656,"failed":0}

Ran 12 of 5668 Specs in 468.208 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5656 Skipped
PASS

Ginkgo ran 1 suite in 7m49.954559068s
Test Suite Passed
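------------------------------
The preference verified by the last spec can be restated as: the pod should land on the node with the fewest PreferNoSchedule taints it does not tolerate. A hedged Go illustration of that counting idea, using placeholder taint names rather than the UUID keys from the log; it is not the scheduler's TaintToleration plugin code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// countUntolerated returns how many PreferNoSchedule taints on a node are not
// covered by any of the pod's tolerations (Equal-operator match, for illustration).
func countUntolerated(taints []v1.Taint, tols []v1.Toleration) int {
	n := 0
	for _, taint := range taints {
		if taint.Effect != v1.TaintEffectPreferNoSchedule {
			continue
		}
		tolerated := false
		for _, tol := range tols {
			if tol.Key == taint.Key && tol.Value == taint.Value &&
				(tol.Effect == "" || tol.Effect == taint.Effect) {
				tolerated = true
				break
			}
		}
		if !tolerated {
			n++
		}
	}
	return n
}

func main() {
	// Placeholder taints standing in for the "tolerable" and "intolerable" sets above.
	tolerable := v1.Taint{Key: "example.com/tolerable", Value: "v", Effect: v1.TaintEffectPreferNoSchedule}
	intolerable := v1.Taint{Key: "example.com/intolerable", Value: "v", Effect: v1.TaintEffectPreferNoSchedule}
	podTols := []v1.Toleration{{Key: tolerable.Key, Operator: v1.TolerationOpEqual, Value: tolerable.Value, Effect: v1.TaintEffectPreferNoSchedule}}

	fmt.Println("first node untolerated taints:", countUntolerated([]v1.Taint{tolerable}, podTols))   // 0 -> preferred
	fmt.Println("other node untolerated taints:", countUntolerated([]v1.Taint{intolerable}, podTols)) // 1 -> less preferred
}
------------------------------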