I0903 14:34:44.002592 18 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0903 14:34:44.002764 18 e2e.go:129] Starting e2e run "fb2a7aa4-b80d-4188-ac8b-c9f2d73cd3ed" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630679682 - Will randomize all specs
Will run 12 of 5484 specs

Sep 3 14:34:44.026: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:34:44.030: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 3 14:34:44.064: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 3 14:34:44.104: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 3 14:34:44.104: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 3 14:34:44.104: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 3 14:34:44.118: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed)
Sep 3 14:34:44.118: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 3 14:34:44.118: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 3 14:34:44.118: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed)
Sep 3 14:34:44.118: INFO: e2e test version: v1.19.11
Sep 3 14:34:44.120: INFO: kube-apiserver version: v1.19.11
Sep 3 14:34:44.120: INFO: >>> kubeConfig: /root/.kube/config
Sep 3 14:34:44.125: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption
  validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 14:34:44.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
Sep 3 14:34:44.157: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Sep 3 14:34:44.165: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Sep 3 14:34:44.178: INFO: Waiting up to 1m0s for all nodes to be ready
Sep 3 14:35:44.207: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node capi-kali-md-0-76b6798f7f-5n8xl.
STEP: Apply 10 fake resource to node capi-kali-md-0-76b6798f7f-7jvhm.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capi-kali-md-0-76b6798f7f-5n8xl
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capi-kali-md-0-76b6798f7f-7jvhm
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:36:08.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-617" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77

• [SLOW TEST:84.644 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":1,"skipped":71,"failed":0}
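For context on the spec above: it grants both nodes a fake extended resource, fills 9/10 of it with one high-priority and three low-priority pods, then creates a medium-priority pod carrying a topology spread constraint over the dedicated per-test key; to satisfy the spread, the scheduler must preempt low-priority victims, leaving "high", "low-1" and "medium" running. A minimal Go sketch of the kind of pod spec involved, built on the same k8s.io/api types the suite uses; the priority-class name, extended-resource name, quantity and selector labels below are illustrative assumptions, not values taken from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "medium" pod: spreads over the dedicated per-test topology key and
	// requests enough of the fake extended resource that lower-priority
	// pods must be preempted to make room.
	medium := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "medium"},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // hypothetical class name
			Containers: []corev1.Container{{
				Name:  "medium",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// hypothetical name; the log only says "fake resource"
						corev1.ResourceName("example.com/fake-pts-res"): resource.MustParse("4"),
					},
				},
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"foo": "bar"}, // hypothetical selector
				},
			}},
		},
	}
	fmt.Println(medium.Name)
}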
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 14:36:08.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 3 14:36:08.799: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 3 14:36:08.807: INFO: Waiting for terminating namespaces to be deleted...
Sep 3 14:36:08.810: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test
Sep 3 14:36:08.817: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container chaos-daemon ready: true, restart count 0
Sep 3 14:36:08.817: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container coredns ready: true, restart count 0
Sep 3 14:36:08.817: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container loopdev ready: true, restart count 0
Sep 3 14:36:08.817: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container kindnet-cni ready: true, restart count 12
Sep 3 14:36:08.817: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container kube-proxy ready: true, restart count 0
Sep 3 14:36:08.817: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container setsysctls ready: true, restart count 0
Sep 3 14:36:08.817: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container chaos-operator ready: true, restart count 0
Sep 3 14:36:08.817: INFO: high from sched-preemption-617 started at 2021-09-03 14:35:48 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:08.817: INFO: Container high ready: true, restart count 0
Sep 3 14:36:08.817: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test
Sep 3 14:36:09.029: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container chaos-mesh ready: true, restart count 0
Sep 3 14:36:09.029: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container chaos-daemon ready: true, restart count 0
Sep 3 14:36:09.029: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container dockerd ready: true, restart count 0
Sep 3 14:36:09.029: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container coredns ready: true, restart count 0
Sep 3 14:36:09.029: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container loopdev ready: true, restart count 0
Sep 3 14:36:09.029: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container kindnet-cni ready: true, restart count 16
Sep 3 14:36:09.029: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container kube-proxy ready: true, restart count 0
Sep 3 14:36:09.029: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container setsysctls ready: true, restart count 0
Sep 3 14:36:09.029: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container local-path-provisioner ready: true, restart count 0
Sep 3 14:36:09.029: INFO: low-1 from sched-preemption-617 started at 2021-09-03 14:35:50 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container low-1 ready: true, restart count 0
Sep 3 14:36:09.029: INFO: medium from sched-preemption-617 started at 2021-09-03 14:36:03 +0000 UTC (1 container statuses recorded)
Sep 3 14:36:09.029: INFO: Container medium ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-3f7c7a33-3959-4ce1-950c-eda2749d56e2 testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c382010c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4231/without-toleration to capi-kali-md-0-76b6798f7f-5n8xl]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c5fdcfd8b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c856644c4], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c96427bc7], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569cb05849ef], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16a1569cb2ca7019], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702: testing-taint-value}, that the pod didn't tolerate, 2 node(s) didn't match node selector.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569cd2519e67], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.]
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16a1569cb2ca7019], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702: testing-taint-value}, that the pod didn't tolerate, 2 node(s) didn't match node selector.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c382010c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4231/without-toleration to capi-kali-md-0-76b6798f7f-5n8xl]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c5fdcfd8b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c856644c4], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569c96427bc7], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569cb05849ef], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16a1569cd2519e67], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16a1569d39250fb9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4231/still-no-tolerations to capi-kali-md-0-76b6798f7f-5n8xl]
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16a1569d65c645c3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: removing the label kubernetes.io/e2e-label-key-3f7c7a33-3959-4ce1-950c-eda2749d56e2 off the node capi-kali-md-0-76b6798f7f-5n8xl
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-3f7c7a33-3959-4ce1-950c-eda2749d56e2
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:36:14.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4231" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:5.403 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":2,"skipped":324,"failed":0}
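In the spec above, the relaunched pod is pinned to the tainted node with a node selector but given no toleration, so scheduling must fail with the "0/3 nodes are available" event until the taint is removed. A small sketch of the taint and the pod-spec shape, using the key/value strings from the log; the ToleratesTaint call is the k8s.io/api/core/v1 library predicate, shown only to illustrate why the pod is rejected (it is not how the e2e test itself checks the result):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The random NoSchedule taint the test applied (values from the log).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-33e15868-c609-4e7e-a2f3-4140bbddc702",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// "still-no-tolerations": pinned to the tainted node, no toleration given.
	pod := corev1.PodSpec{
		NodeSelector: map[string]string{
			"kubernetes.io/e2e-label-key-3f7c7a33-3959-4ce1-950c-eda2749d56e2": "testing-label-value",
		},
		// Tolerations deliberately left empty.
	}

	// An empty toleration does not tolerate the taint, hence FailedScheduling.
	none := corev1.Toleration{}
	fmt.Println(taint.ToString(), none.ToleratesTaint(&taint), len(pod.Tolerations))
}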
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 14:36:14.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141
Sep 3 14:36:14.213: INFO: Waiting up to 1m0s for all nodes to be ready
Sep 3 14:37:14.244: INFO: Waiting for terminating namespaces to be deleted...
Sep 3 14:37:14.247: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 3 14:37:14.260: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 3 14:37:14.260: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
Sep 3 14:37:14.272: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl
Sep 3 14:37:14.272: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.272: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:37:14.273: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Sep 3 14:37:14.273: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200
Sep 3 14:37:14.273: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:37:14.273: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Sep 3 14:37:14.284: INFO: Waiting for running...
Sep 3 14:37:19.340: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
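The cpuFraction/memFraction figures above and below are just requested-over-allocatable ratios; the balancing helper creates filler pods to push the two nodes toward equal fractions before the actual scheduling assertion runs. Judging by the later "cpuFraction: 1" lines, the ratio appears to be capped at 1; that capping is an assumption read off the log, not taken from the helper's source. A standalone sketch reproducing the logged numbers:

package main

import "fmt"

// fraction mirrors the ratio the e2e helper logs: requested/allocatable,
// assumed capped at 1 (cf. "cpuFraction: 1" once requests exceed capacity).
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	fmt.Println(fraction(100, 88000))             // 0.0011363636... (cpuFraction above)
	fmt.Println(fraction(104857600, 67430219776)) // 0.0015550535... (memFraction above)
	fmt.Println(fraction(351300, 88000))          // 1 (capped, as in the post-balancing logs)
}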
Sep 3 14:37:24.401: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1
Sep 3 14:37:24.401: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Sep 3 14:37:24.401: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Pod for on the node: 99f7f876-585e-42f9-8909-93746110a1a2-0, Cpu: 43900, Mem: 33610252288
Sep 3 14:37:24.401: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1
Sep 3 14:37:24.401: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f97d29ff-ca15-4792-bca3-c4951298f173=testing-taint-value-6e61410a-0a4c-4623-bc1a-d85cc64e08fe:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-bb74d168-42ef-4baf-97b8-ac0aba6a6359=testing-taint-value-9269b4e2-d6c5-4496-97be-a498fd6952ec:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-88541d4e-72f0-4d06-99c5-b515b37390b5=testing-taint-value-9c8e2d89-f908-4169-88ad-5718d70d9b23:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-bdffbbcf-aa80-47b8-b97c-3ae1615ada8f=testing-taint-value-f62bcdc2-c269-4073-9ec9-0004b70e4ecd:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2993bde0-effe-40fe-ba1b-b15d31aeecda=testing-taint-value-b08cb34c-69a1-4f65-936f-d9f460f55689:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-66ca11ae-caf5-4ad9-826a-032795642894=testing-taint-value-26f829ee-5bde-4bd0-a6b9-049a0d40d724:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f0b67dff-23e8-4b3c-943e-c9804c0cde50=testing-taint-value-e42a97ad-212e-4c24-a1a2-a40a0ba9e5ef:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-99a9afaa-e743-40ae-aa21-2f44562e87c4=testing-taint-value-f330a214-0f41-4e57-8f8b-f12b9e7f85ac:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-a547b85b-fd1a-49ce-b23b-e084a2759e6c=testing-taint-value-7f1aaf8b-0003-4fba-b3fe-0ef7add42a6a:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d445e161-538b-432f-8115-1d057e02661e=testing-taint-value-60d9bca1-a867-4919-a5b9-c5a91d4e1dfd:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5819f9be-9c37-4da4-8538-dd338f6f78d7=testing-taint-value-40a170b6-2bc5-497a-82af-8ec91ad7d5bc:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5035a3d4-550e-4284-a8e2-b0dd2978bdaf=testing-taint-value-e45c27f8-bbdc-40ad-a4d3-409910c76b43:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1852bf94-68dc-4520-8d79-65a80ef6888b=testing-taint-value-7d96429c-bf53-47a8-adce-9c09ecac8e92:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-dde810db-e32d-4eb2-ba4a-9688482d5403=testing-taint-value-85492d25-3fb3-466f-beb8-0b418e33573c:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-ffbf0faf-e0a5-4370-bd5d-2efcfb170e59=testing-taint-value-f8ab6f4d-a29d-469c-9321-ccb85e06fc9e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7438ba02-26ee-440b-9303-980727345eb4=testing-taint-value-8d4004cb-c4b7-4058-9d3a-1311bf46a8e6:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-de8ed5fe-cf77-4a26-9bbd-e011c6e926e6=testing-taint-value-bc85eeef-4d99-46a6-9508-f8dded7c5042:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0691f8f7-7d1b-4d9c-b7ed-f61764d3ab45=testing-taint-value-78faa880-8453-41c9-a67d-5f49b9664335:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c690abc8-0b08-4dda-84e8-d0a4d1cb4d23=testing-taint-value-80c02c04-57eb-405e-afdf-c28bb7c7ec58:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8972f4c0-5111-4faf-8c74-55e77dd23a60=testing-taint-value-d61c1419-fef7-4c2f-a473-84bc387ceca6:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8972f4c0-5111-4faf-8c74-55e77dd23a60=testing-taint-value-d61c1419-fef7-4c2f-a473-84bc387ceca6:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c690abc8-0b08-4dda-84e8-d0a4d1cb4d23=testing-taint-value-80c02c04-57eb-405e-afdf-c28bb7c7ec58:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0691f8f7-7d1b-4d9c-b7ed-f61764d3ab45=testing-taint-value-78faa880-8453-41c9-a67d-5f49b9664335:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-de8ed5fe-cf77-4a26-9bbd-e011c6e926e6=testing-taint-value-bc85eeef-4d99-46a6-9508-f8dded7c5042:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7438ba02-26ee-440b-9303-980727345eb4=testing-taint-value-8d4004cb-c4b7-4058-9d3a-1311bf46a8e6:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ffbf0faf-e0a5-4370-bd5d-2efcfb170e59=testing-taint-value-f8ab6f4d-a29d-469c-9321-ccb85e06fc9e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-dde810db-e32d-4eb2-ba4a-9688482d5403=testing-taint-value-85492d25-3fb3-466f-beb8-0b418e33573c:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1852bf94-68dc-4520-8d79-65a80ef6888b=testing-taint-value-7d96429c-bf53-47a8-adce-9c09ecac8e92:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5035a3d4-550e-4284-a8e2-b0dd2978bdaf=testing-taint-value-e45c27f8-bbdc-40ad-a4d3-409910c76b43:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5819f9be-9c37-4da4-8538-dd338f6f78d7=testing-taint-value-40a170b6-2bc5-497a-82af-8ec91ad7d5bc:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d445e161-538b-432f-8115-1d057e02661e=testing-taint-value-60d9bca1-a867-4919-a5b9-c5a91d4e1dfd:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-a547b85b-fd1a-49ce-b23b-e084a2759e6c=testing-taint-value-7f1aaf8b-0003-4fba-b3fe-0ef7add42a6a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-99a9afaa-e743-40ae-aa21-2f44562e87c4=testing-taint-value-f330a214-0f41-4e57-8f8b-f12b9e7f85ac:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f0b67dff-23e8-4b3c-943e-c9804c0cde50=testing-taint-value-e42a97ad-212e-4c24-a1a2-a40a0ba9e5ef:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-66ca11ae-caf5-4ad9-826a-032795642894=testing-taint-value-26f829ee-5bde-4bd0-a6b9-049a0d40d724:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2993bde0-effe-40fe-ba1b-b15d31aeecda=testing-taint-value-b08cb34c-69a1-4f65-936f-d9f460f55689:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-bdffbbcf-aa80-47b8-b97c-3ae1615ada8f=testing-taint-value-f62bcdc2-c269-4073-9ec9-0004b70e4ecd:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-88541d4e-72f0-4d06-99c5-b515b37390b5=testing-taint-value-9c8e2d89-f908-4169-88ad-5718d70d9b23:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-bb74d168-42ef-4baf-97b8-ac0aba6a6359=testing-taint-value-9269b4e2-d6c5-4496-97be-a498fd6952ec:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f97d29ff-ca15-4792-bca3-c4951298f173=testing-taint-value-6e61410a-0a4c-4623-bc1a-d85cc64e08fe:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:37:30.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-2908" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138

• [SLOW TEST:76.074 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":3,"skipped":1156,"failed":0}
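This spec exercises TaintToleration scoring rather than filtering: PreferNoSchedule taints never make a node infeasible, they only cost score for every taint the pod does not tolerate, so the pod that tolerates all ten taints on the first node should land there despite both nodes being tainted. A minimal sketch with one taint/toleration pair lifted from the log; the ToleratesTaint predicate comes from k8s.io/api/core/v1, shown as a stand-in for the test's own helper code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One of the ten "tolerable" taints from the log (PreferNoSchedule is a
	// soft constraint: it affects scoring, not feasibility).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-f97d29ff-ca15-4792-bca3-c4951298f173",
		Value:  "testing-taint-value-6e61410a-0a4c-4623-bc1a-d85cc64e08fe",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	// The test's pod carries one such toleration per taint on the first
	// node, so TaintToleration scoring favors that node.
	tol := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}
	fmt.Println(tol.ToleratesTaint(&taint)) // true: this taint won't count against the node
}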
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run
  verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 14:37:30.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep 3 14:37:30.307: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep 3 14:37:30.316: INFO: Waiting for terminating namespaces to be deleted...
Sep 3 14:37:30.319: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test
Sep 3 14:37:30.326: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container chaos-daemon ready: true, restart count 0
Sep 3 14:37:30.326: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container coredns ready: true, restart count 0
Sep 3 14:37:30.326: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container loopdev ready: true, restart count 0
Sep 3 14:37:30.326: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container kindnet-cni ready: true, restart count 12
Sep 3 14:37:30.326: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container kube-proxy ready: true, restart count 0
Sep 3 14:37:30.326: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container setsysctls ready: true, restart count 0
Sep 3 14:37:30.326: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container chaos-operator ready: true, restart count 0
Sep 3 14:37:30.326: INFO: with-tolerations from sched-priority-2908 started at 2021-09-03 14:37:25 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.326: INFO: Container with-tolerations ready: true, restart count 0
Sep 3 14:37:30.326: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test
Sep 3 14:37:30.333: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container chaos-mesh ready: true, restart count 0
Sep 3 14:37:30.333: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container chaos-daemon ready: true, restart count 0
Sep 3 14:37:30.333: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container dockerd ready: true, restart count 0
Sep 3 14:37:30.333: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container coredns ready: true, restart count 0
Sep 3 14:37:30.333: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container loopdev ready: true, restart count 0
Sep 3 14:37:30.333: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container kindnet-cni ready: true, restart count 16
Sep 3 14:37:30.333: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container kube-proxy ready: true, restart count 0
Sep 3 14:37:30.333: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container setsysctls ready: true, restart count 0
Sep 3 14:37:30.333: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded)
Sep 3 14:37:30.333: INFO: Container local-path-provisioner ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36.16a156afa02e88fc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient example.com/beardsecond.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36.16a156b1a2a2f09c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4240/filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36 to capi-kali-md-0-76b6798f7f-7jvhm]
STEP: Considering event: Type = [Normal], Name = [filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36.16a156b1c4da6fcd], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36.16a156b1c8055aa8], Reason = [Created], Message = [Created container filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36]
STEP: Considering event: Type = [Normal], Name = [filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36.16a156b1cfea24da], Reason = [Started], Message = [Started container filler-pod-41a63a64-e16c-4bba-bb1d-101b4be22c36]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156af26bd89d7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4240/without-label to capi-kali-md-0-76b6798f7f-7jvhm]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156af4e933117], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156af509f9133], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156af5a588cfe], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156af9efd4123], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16a156afbd609bf7], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.]
STEP: Considering event: Type = [Warning], Name = [additional-pode2cab9d6-16c4-4bfb-930b-90d17d3f890e.16a156b1f58faba1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient example.com/beardsecond.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:37:43.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4240" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:13.165 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":4,"skipped":2102,"failed":0}
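The overhead spec above registers a RuntimeClass whose Overhead.PodFixed is charged to every pod that selects it, on top of the containers' own requests; that is why the second pod fails with "Insufficient example.com/beardsecond" even though its container requests alone might fit. A rough sketch of the objects involved, against the node.k8s.io/v1beta1 API current in v1.19; the RuntimeClass name, handler and quantities are illustrative assumptions (only the extended-resource name example.com/beardsecond appears in the events):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass with a fixed per-pod overhead in the fake extended resource.
	// Name, handler and quantities are hypothetical.
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "runc",
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("250"),
			},
		},
	}

	rcName := rc.Name
	pod := corev1.PodSpec{
		RuntimeClassName: &rcName, // opting in charges the overhead to this pod
		Containers: []corev1.Container{{
			Name:  "filler",
			Image: "k8s.gcr.io/pause:3.2",
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"),
				},
			},
		}},
	}
	// Effective demand = container requests + pod overhead; once that exceeds
	// what the nodes advertise, the scheduler reports
	// "Insufficient example.com/beardsecond", as in the events above.
	fmt.Println(*pod.RuntimeClassName)
}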
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
  Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 3 14:37:43.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141
Sep 3 14:37:43.475: INFO: Waiting up to 1m0s for all nodes to be ready
Sep 3 14:38:43.506: INFO: Waiting for terminating namespaces to be deleted...
Sep 3 14:38:43.510: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 3 14:38:43.525: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 3 14:38:43.525: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Verifying the node has a label kubernetes.io/hostname
Sep 3 14:38:45.560: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:38:45.560: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Sep 3 14:38:45.560: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:45.560: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:38:45.560: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
Sep 3 14:38:45.566: INFO: Waiting for running...
Sep 3 14:38:50.623: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Sep 3 14:38:55.683: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl
Sep 3 14:38:55.683: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:38:55.684: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Sep 3 14:38:55.684: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Sep 3 14:38:55.684: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
Sep 3 14:38:55.684: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
STEP: Trying to launch the pod with podAntiAffinity.
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
Sep 3 14:39:59.729: INFO: Failed to wait until all memory balanced pods are deleted: timed out waiting for the condition.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 3 14:39:59.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-1495" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138

• [SLOW TEST:136.292 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":5,"skipped":2573,"failed":0}
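In the spec above, the test first starts pod-with-label-security-s1, then submits a pod whose podAntiAffinity excludes any node (topology key kubernetes.io/hostname) already running a pod matching that label, and verifies the scheduler picks the other node. A sketch of such an anti-affinity term; the exact selector (security in (s1)) is inferred from the pod name, and whether the test uses the required or preferred flavor is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity against the "security: s1" pod launched first: the new
	// pod must avoid any hostname topology domain already holding a match.
	affinity := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"s1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Println(affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}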
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 Sep 3 14:41:04.176: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Sep 3 14:41:04.176: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Sep 3 14:41:04.176: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 Sep 3 14:41:04.176: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Sep 3 14:41:04.176: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Sep 3 14:41:04.181: INFO: Waiting for running... Sep 3 14:41:09.239: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Sep 3 14:41:14.319: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:14.319: INFO: Pod for on the node: bf615750-c9f4-46bf-be6e-f52441613011-0, Cpu: 43900, Mem: 33610252288 (last message repeated 7 more times) Sep 3 14:41:14.321: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1 Sep 3 14:41:14.321: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Sep 3 14:41:14.321: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:14.321: INFO: Pod for on the node: bf615750-c9f4-46bf-be6e-f52441613011-0, Cpu: 43900, Mem: 33610252288 (last message repeated 9 more times) Sep 3 14:41:14.323: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1 Sep 3 14:41:14.323: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "capi-kali-md-0-76b6798f7f-5n8xl" STEP: Verifying if the test-pod lands on node "capi-kali-md-0-76b6798f7f-7jvhm" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node capi-kali-md-0-76b6798f7f-7jvhm STEP: verifying the node doesn't have the label 
kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:41:22.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5094" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:82.732 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":6,"skipped":2839,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:41:22.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:41:22.509: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:41:22.520: INFO: Waiting for terminating namespaces to be deleted... 
Sep 3 14:41:22.523: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:41:22.531: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:22.531: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:22.531: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:22.531: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:41:22.531: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:22.531: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:22.531: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:41:22.531: INFO: rs-e2e-pts-score-bvlf2 from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:22.531: INFO: rs-e2e-pts-score-bvxx4 from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:22.531: INFO: rs-e2e-pts-score-dcb6m from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:22.531: INFO: rs-e2e-pts-score-n8h8q from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.531: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:22.531: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:41:22.539: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:41:22.539: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:22.539: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:41:22.539: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:22.539: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 
14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:22.539: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:41:22.539: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:22.539: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:22.539: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 3 14:41:22.539: INFO: test-pod from sched-priority-5094 started at 2021-09-03 14:41:16 +0000 UTC (1 container statuses recorded) Sep 3 14:41:22.539: INFO: Container test-pod ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5c37ee32-8dfb-46eb-b15f-a9c1afe54825=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-7e58369f-4c86-4841-bde2-5bfd097a78af testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-7e58369f-4c86-4841-bde2-5bfd097a78af off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-7e58369f-4c86-4841-bde2-5bfd097a78af STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5c37ee32-8dfb-46eb-b15f-a9c1afe54825=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:41:26.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6811" for this suite. 
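The matching pair exercised above is a NoSchedule taint on the node plus an equal toleration on the relaunched pod. A sketch of both, using the exact key, value, and effect printed in the STEP lines (only the surrounding wiring is illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Taint applied to the found node ("Trying to apply a random taint...").
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-5c37ee32-8dfb-46eb-b15f-a9c1afe54825",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// Toleration carried by the relaunched pod ("now with tolerations").
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}

	// ToleratesTaint is the core/v1 helper the scheduling logic relies on:
	// key, value, and effect all match here, so this prints true.
	fmt.Println(toleration.ToleratesTaint(&taint))
}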
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":7,"skipped":3014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:41:26.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:41:26.697: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:41:26.706: INFO: Waiting for terminating namespaces to be deleted... Sep 3 14:41:26.709: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:41:26.717: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:26.717: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:26.717: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:26.717: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:41:26.717: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:26.717: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:26.717: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:41:26.717: INFO: with-tolerations from sched-pred-6811 started at 2021-09-03 14:41:24 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container with-tolerations ready: true, restart count 0 Sep 3 14:41:26.717: INFO: rs-e2e-pts-score-bvlf2 from sched-priority-5094 started at 
2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:26.717: INFO: rs-e2e-pts-score-bvxx4 from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:26.717: INFO: rs-e2e-pts-score-dcb6m from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:26.717: INFO: rs-e2e-pts-score-n8h8q from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.717: INFO: Container e2e-pts-score ready: true, restart count 0 Sep 3 14:41:26.717: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:41:26.726: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:41:26.726: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:26.726: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:41:26.726: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:26.726: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:26.726: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:41:26.726: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:26.726: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:26.726: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 3 14:41:26.726: INFO: test-pod from sched-priority-5094 started at 2021-09-03 14:41:16 +0000 UTC (1 container statuses recorded) Sep 3 14:41:26.726: INFO: Container test-pod ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-da904c0f-972f-4356-aeed-32a9355e826c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-da904c0f-972f-4356-aeed-32a9355e826c off the node capi-kali-md-0-76b6798f7f-5n8xl STEP: verifying the node doesn't have the label kubernetes.io/e2e-da904c0f-972f-4356-aeed-32a9355e826c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:41:32.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9054" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.139 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":8,"skipped":3314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:41:32.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:41:32.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:41:32.849: INFO: Waiting for terminating namespaces to be deleted... 
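The required-NodeAffinity spec above only shows the node label in its STEP lines (key kubernetes.io/e2e-da904c0f-972f-4356-aeed-32a9355e826c, value 42); the pod relaunched "now with labels" needs a required node-affinity term selecting that label. A sketch of such a term, where only the key and value come from the log and the affinity wiring is an illustration:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Label applied to the chosen node in the spec above.
	labelKey := "kubernetes.io/e2e-da904c0f-972f-4356-aeed-32a9355e826c"
	labelValue := "42"

	// A required term: the pod may only schedule onto nodes carrying the label.
	affinity := &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      labelKey,
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{labelValue},
					}},
				}},
			},
		},
	}
	terms := affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms
	fmt.Println(terms[0].MatchExpressions[0].Key)
}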
Sep 3 14:41:32.852: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:41:32.860: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:32.860: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:32.860: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:32.860: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:41:32.860: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:32.860: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:32.860: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:41:32.860: INFO: with-tolerations from sched-pred-6811 started at 2021-09-03 14:41:24 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container with-tolerations ready: false, restart count 0 Sep 3 14:41:32.860: INFO: with-labels from sched-pred-9054 started at 2021-09-03 14:41:28 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container with-labels ready: true, restart count 0 Sep 3 14:41:32.860: INFO: rs-e2e-pts-score-bvxx4 from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container e2e-pts-score ready: false, restart count 0 Sep 3 14:41:32.860: INFO: rs-e2e-pts-score-n8h8q from sched-priority-5094 started at 2021-09-03 14:41:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.860: INFO: Container e2e-pts-score ready: false, restart count 0 Sep 3 14:41:32.860: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:41:32.867: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:41:32.867: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:41:32.867: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:41:32.867: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container coredns ready: true, restart count 0 Sep 3 14:41:32.867: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC 
(1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:41:32.867: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:41:32.867: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:41:32.867: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:41:32.867: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container local-path-provisioner ready: true, restart count 0 Sep 3 14:41:32.867: INFO: test-pod from sched-priority-5094 started at 2021-09-03 14:41:16 +0000 UTC (1 container statuses recorded) Sep 3 14:41:32.867: INFO: Container test-pod ready: false, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Sep 3 14:41:32.885: INFO: Pod chaos-controller-manager-69c479c674-2scf8 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod chaos-daemon-6lv64 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod chaos-daemon-tzn7z requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod dockerd requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod coredns-f9fd979d6-45cv5 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod coredns-f9fd979d6-qdhsv requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod create-loop-devs-4jkpj requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod create-loop-devs-qjl7t requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod kindnet-55d6f requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod kindnet-7cmgn requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod kube-proxy-h8v9x requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod kube-proxy-lqr9t requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod tune-sysctls-mv2h6 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Pod tune-sysctls-wz9ls requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod chaos-operator-ce-5754fd4b69-crx4p requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod local-path-provisioner-556d4466c8-khwq6 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: 
INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod with-labels requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod rs-e2e-pts-score-bvxx4 requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod rs-e2e-pts-score-n8h8q requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:41:32.885: INFO: Pod test-pod requesting local ephemeral resource =0 on Node capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:41:32.885: INFO: Using pod capacity: 47063248896 Sep 3 14:41:32.885: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl has local ephemeral resource allocatable: 470632488960 Sep 3 14:41:32.885: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm has local ephemeral resource allocatable: 470632488960 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Sep 3 14:41:32.967: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16a156e79f621e10], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-0 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16a156e80ea09800], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16a156e8129b2b8b], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16a156e8252fd0c1], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16a156e79f927e02], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-1 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16a156e80dc058fc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16a156e811115088], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16a156e8252fd9cd], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16a156e7a2322623], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-10 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16a156e80e26c58d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16a156e8119e0957], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16a156e821e924e9], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16a156e7a2631830], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-11 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16a156e80e33409d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = 
[overcommit-11.16a156e81272b3d3], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16a156e824f0dcdd], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16a156e7a28122c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-12 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16a156e80ea083b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16a156e812bb0b06], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16a156e8252c06a3], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16a156e7a2db2774], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-13 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16a156e80f1dd900], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16a156e81375830d], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16a156e8258a0903], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16a156e7a2fa6481], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-14 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16a156e80ebcdf2b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16a156e8128ebd9e], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16a156e8254448e0], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16a156e7a335b9b6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-15 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16a156e80e09ba34], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16a156e811f5c61c], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16a156e8251454a1], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16a156e7a357e89b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-16 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16a156e80df3eaa0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16a156e81161e779], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = 
[overcommit-16.16a156e825202773], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16a156e7a37859b4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-17 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16a156e80d9e7e92], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16a156e810bb18f0], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16a156e824f11ee3], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16a156e7a39ab5a8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-18 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16a156e80dd26f1a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16a156e810fb998d], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16a156e82185f34f], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16a156e7a3c338b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-19 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16a156e80eae6c44], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16a156e812b5a063], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16a156e8253014ff], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16a156e7a039a985], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-2 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16a156e80eae55a4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16a156e811f445a1], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16a156e8257a9108], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16a156e7a05c226a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-3 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16a156e80e061f4c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16a156e81178678e], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16a156e8254ecf58], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = 
[overcommit-4.16a156e7a0accfd9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-4 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16a156e80ecea6fc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16a156e8137c5451], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16a156e82173a334], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16a156e7a0fbeef2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-5 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16a156e80d9cb748], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16a156e811409eb0], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16a156e825757946], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16a156e7a15cbcd9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-6 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16a156e80ba853f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16a156e80e3c0aa3], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16a156e81872e5e7], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16a156e7a1a08376], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-7 to capi-kali-md-0-76b6798f7f-7jvhm] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16a156e80ec82296], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16a156e813813c96], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16a156e8252ce512], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16a156e7a1d901a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-8 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16a156e80e237475], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16a156e812808a11], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16a156e824eb2037], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16a156e7a20193fa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8184/overcommit-9 to capi-kali-md-0-76b6798f7f-5n8xl] STEP: Considering event: Type = [Normal], Name 
= [overcommit-9.16a156e80e2428d0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16a156e812578b02], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16a156e821eac7bd], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16a156e9fbbc9e8d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:41:44.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8184" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.244 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":9,"skipped":3650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:41:44.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 Sep 3 14:41:44.086: INFO: Waiting up to 1m0s for all nodes to be ready Sep 3 14:42:44.123: INFO: Waiting for terminating namespaces to be deleted... Sep 3 14:42:44.127: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Sep 3 14:42:44.142: INFO: 18 / 18 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Sep 3 14:42:44.142: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
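The ephemeral-storage saturation above is straight arithmetic: each schedulable node reports 470632488960 bytes allocatable, the test divides that by 10 to get the per-pod request of 47063248896 bytes (the "Using pod capacity" line), 20 such pods exactly fill the two worker nodes, and the 21st pod is rejected with "2 Insufficient ephemeral-storage". A sketch of a request/limit pair sized that way (the quantities come from the log; everything else is illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	perNode := int64(470632488960) // allocatable ephemeral storage per node, from the log
	podCapacity := perNode / 10    // 47063248896, the "Using pod capacity" value

	// Ten pods with this request/limit exactly saturate one node, so an
	// eleventh pod on that node fails with Insufficient ephemeral-storage.
	req := v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceEphemeralStorage: *resource.NewQuantity(podCapacity, resource.BinarySI),
		},
		Limits: v1.ResourceList{
			v1.ResourceEphemeralStorage: *resource.NewQuantity(podCapacity, resource.BinarySI),
		},
	}
	q := req.Requests[v1.ResourceEphemeralStorage]
	fmt.Println(podCapacity, q.String())
}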
[It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 Sep 3 14:42:44.151: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:42:44.151: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 (last message repeated 6 more times) Sep 3 14:42:44.151: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Sep 3 14:42:44.151: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Sep 3 14:42:44.151: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:42:44.151: INFO: Pod for on the node: local-path-provisioner-556d4466c8-khwq6, Cpu: 100, Mem: 209715200 (last message repeated 8 more times) Sep 3 14:42:44.151: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Sep 3 14:42:44.151: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Sep 3 14:42:44.163: INFO: Waiting for running... Sep 3 14:42:49.219: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Sep 3 14:42:54.281: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-5n8xl Sep 3 14:42:54.281: INFO: Pod for on the node: a65f16dc-594e-4703-a02f-405905f8732c-0, Cpu: 43900, Mem: 33610252288 (last message repeated 7 more times) Sep 3 14:42:54.281: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1 Sep 3 14:42:54.281: INFO: Node: capi-kali-md-0-76b6798f7f-5n8xl, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Sep 3 14:42:54.281: INFO: ComputeCPUMemFraction for node: capi-kali-md-0-76b6798f7f-7jvhm Sep 3 14:42:54.281: INFO: Pod for on the node: a65f16dc-594e-4703-a02f-405905f8732c-0, Cpu: 43900, Mem: 33610252288 (last message repeated 9 more times) Sep 3 14:42:54.281: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedCPUResource: 439100, cpuAllocatableMil: 88000, cpuFraction: 1 Sep 3 14:42:54.281: INFO: Node: capi-kali-md-0-76b6798f7f-7jvhm, totalRequestedMemResource: 336207380480, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-422 to 1 STEP: Verify the pods should not scheduled to the node: capi-kali-md-0-76b6798f7f-5n8xl STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-422, will wait for the garbage collector to delete the pods Sep 3 14:43:00.471: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 6.291339ms Sep 3 14:43:00.971: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 500.280563ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 3 14:43:23.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-422" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:99.644 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":10,"skipped":3699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 3 14:43:23.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 3 14:43:23.734: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 3 14:43:23.744: INFO: Waiting for terminating namespaces to be deleted... 
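The avoidPod spec above drives scoring through the alpha scheduler.alpha.kubernetes.io/preferAvoidPods node annotation (v1.PreferAvoidPodsAnnotationKey), whose JSON value names the controller whose pods the node would rather not host; once the first node is annotated, the single replica lands on the other node, as the "Verify the pods should not scheduled to the node" step checks. A sketch of the annotation payload, under the assumption that the entry points at the scheduler-priority-avoid-pod ReplicationController (the UID and reason are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",           // RC name from the log
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
					Controller: &controller,
				},
			},
			Reason: "e2e priorities test", // illustrative reason string
		}},
	}
	payload, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	// The annotation lives on the Node object under this well-known key.
	fmt.Printf("%s=%s\n", v1.PreferAvoidPodsAnnotationKey, payload)
}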
Sep 3 14:43:23.747: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test Sep 3 14:43:23.754: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:43:23.754: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container coredns ready: true, restart count 0 Sep 3 14:43:23.754: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:43:23.754: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container kindnet-cni ready: true, restart count 12 Sep 3 14:43:23.754: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:43:23.754: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:43:23.754: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.754: INFO: Container chaos-operator ready: true, restart count 0 Sep 3 14:43:23.754: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test Sep 3 14:43:23.761: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.761: INFO: Container chaos-mesh ready: true, restart count 0 Sep 3 14:43:23.761: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.761: INFO: Container chaos-daemon ready: true, restart count 0 Sep 3 14:43:23.762: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container dockerd ready: true, restart count 0 Sep 3 14:43:23.762: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container coredns ready: true, restart count 0 Sep 3 14:43:23.762: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container loopdev ready: true, restart count 0 Sep 3 14:43:23.762: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container kindnet-cni ready: true, restart count 16 Sep 3 14:43:23.762: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container kube-proxy ready: true, restart count 0 Sep 3 14:43:23.762: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded) Sep 3 14:43:23.762: INFO: Container setsysctls ready: true, restart count 0 Sep 3 14:43:23.762: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container 
Sep  3 14:43:23.762: INFO: 	Container local-path-provisioner ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node capi-kali-md-0-76b6798f7f-5n8xl
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node capi-kali-md-0-76b6798f7f-7jvhm
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep  3 14:43:29.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3585" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.206 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":11,"skipped":3999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
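The spec above hands each of the four pods a pod-level topologySpreadConstraint keyed on the freshly applied kubernetes.io/e2e-pts-filter node label; with maxSkew=1 and whenUnsatisfiable=DoNotSchedule, the only placement the filter admits for four pods on two labeled nodes is a 2/2 split. A rough sketch of such a pod in Go; the topology key matches the log, while the label set, pod name, and pause image are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // spreadPod returns a pod the scheduler may only place where the
    // per-topology-domain count of matching pods stays within maxSkew=1
    // of the minimum across all domains.
    func spreadPod(name string) *corev1.Pod {
        labels := map[string]string{"app": "e2e-pts-filter"} // illustrative label set
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name, Labels: labels},
            Spec: corev1.PodSpec{
                TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
                    MaxSkew:           1,
                    TopologyKey:       "kubernetes.io/e2e-pts-filter", // dedicated key from the log
                    WhenUnsatisfiable: corev1.DoNotSchedule,           // hard filter, not a preference
                    LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
                }},
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2", // placeholder image
                }},
            },
        }
    }

    func main() {
        p := spreadPod("pts-pod-1")
        fmt.Println(p.Name, p.Spec.TopologySpreadConstraints[0].TopologyKey)
    }

DoNotSchedule makes the constraint a hard predicate (excess pods stay Pending rather than skewing), which is why this case sits under SchedulerPredicates instead of SchedulerPriorities.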
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep  3 14:43:29.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Sep  3 14:43:29.941: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Sep  3 14:43:29.951: INFO: Waiting for terminating namespaces to be deleted...
Sep  3 14:43:29.954: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-5n8xl before test
Sep  3 14:43:29.960: INFO: chaos-daemon-tzn7z from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container chaos-daemon ready: true, restart count 0
Sep  3 14:43:29.960: INFO: coredns-f9fd979d6-qdhsv from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container coredns ready: true, restart count 0
Sep  3 14:43:29.960: INFO: create-loop-devs-qjl7t from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container loopdev ready: true, restart count 0
Sep  3 14:43:29.960: INFO: kindnet-55d6f from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container kindnet-cni ready: true, restart count 12
Sep  3 14:43:29.960: INFO: kube-proxy-lqr9t from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  3 14:43:29.960: INFO: tune-sysctls-wz9ls from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container setsysctls ready: true, restart count 0
Sep  3 14:43:29.960: INFO: chaos-operator-ce-5754fd4b69-crx4p from litmus started at 2021-08-31 13:03:04 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container chaos-operator ready: true, restart count 0
Sep  3 14:43:29.960: INFO: rs-e2e-pts-filter-cgr2s from sched-pred-3585 started at 2021-09-03 14:43:27 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container e2e-pts-filter ready: true, restart count 0
Sep  3 14:43:29.960: INFO: rs-e2e-pts-filter-flpnf from sched-pred-3585 started at 2021-09-03 14:43:27 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.960: INFO: 	Container e2e-pts-filter ready: true, restart count 0
Sep  3 14:43:29.960: INFO: Logging pods the apiserver thinks is on node capi-kali-md-0-76b6798f7f-7jvhm before test
Sep  3 14:43:29.968: INFO: chaos-controller-manager-69c479c674-2scf8 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.968: INFO: 	Container chaos-mesh ready: true, restart count 0
Sep  3 14:43:29.968: INFO: chaos-daemon-6lv64 from default started at 2021-08-31 13:05:14 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.968: INFO: 	Container chaos-daemon ready: true, restart count 0
Sep  3 14:43:29.968: INFO: dockerd from default started at 2021-08-31 13:02:43 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container dockerd ready: true, restart count 0
Sep  3 14:43:29.969: INFO: coredns-f9fd979d6-45cv5 from kube-system started at 2021-08-30 14:57:52 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container coredns ready: true, restart count 0
Sep  3 14:43:29.969: INFO: create-loop-devs-4jkpj from kube-system started at 2021-08-30 14:57:49 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container loopdev ready: true, restart count 0
Sep  3 14:43:29.969: INFO: kindnet-7cmgn from kube-system started at 2021-08-30 14:57:23 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container kindnet-cni ready: true, restart count 16
Sep  3 14:43:29.969: INFO: kube-proxy-h8v9x from kube-system started at 2021-08-30 14:57:13 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container kube-proxy ready: true, restart count 0
Sep  3 14:43:29.969: INFO: tune-sysctls-mv2h6 from kube-system started at 2021-08-30 14:57:54 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container setsysctls ready: true, restart count 0
Sep  3 14:43:29.969: INFO: local-path-provisioner-556d4466c8-khwq6 from local-path-storage started at 2021-08-30 14:58:21 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container local-path-provisioner ready: true, restart count 0
Sep  3 14:43:29.969: INFO: rs-e2e-pts-filter-4wf2f from sched-pred-3585 started at 2021-09-03 14:43:27 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container e2e-pts-filter ready: true, restart count 0
Sep  3 14:43:29.969: INFO: rs-e2e-pts-filter-8jgpb from sched-pred-3585 started at 2021-09-03 14:43:27 +0000 UTC (1 container statuses recorded)
Sep  3 14:43:29.969: INFO: 	Container e2e-pts-filter ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16a15702e2e4e3a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep  3 14:43:30.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5546" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":12,"skipped":4025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
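The NodeAffinity case above needs nothing more than a pod whose nodeSelector matches no node label: every node fails the selector check, the pod stays Pending, and the test just watches for the FailedScheduling event recorded in the log before tearing the namespace down. A sketch of such a pod; the selector key/value pair and image are illustrative rather than the exact values the suite uses:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node carries this label, so every node fails the
                // NodeAffinity/NodeSelector filter and the scheduler emits a
                // FailedScheduling event ("0/3 nodes are available: 3 node(s)
                // didn't match node selector.").
                NodeSelector: map[string]string{"label": "nonempty"}, // illustrative pair
                Containers: []corev1.Container{{
                    Name:  "restricted-pod",
                    Image: "k8s.gcr.io/pause:3.2", // placeholder image
                }},
            },
        }
        fmt.Println(pod.Name, pod.Spec.NodeSelector)
    }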
Sep  3 14:43:31.021: INFO: Running AfterSuite actions on all nodes
Sep  3 14:43:31.021: INFO: Running AfterSuite actions on node 1
Sep  3 14:43:31.021: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0}

Ran 12 of 5484 Specs in 526.999 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped
PASS

Ginkgo ran 1 suite in 8m48.530498422s
Test Suite Passed