I0521 16:58:02.624382 17 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0521 16:58:02.624641 17 e2e.go:129] Starting e2e run "70737d8e-2ac6-4e39-b66e-9408b242a453" on Ginkgo node 1 {"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621616281 - Will randomize all specs Will run 12 of 5484 specs May 21 16:58:02.656: INFO: >>> kubeConfig: /root/.kube/config May 21 16:58:02.660: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 21 16:58:02.687: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 21 16:58:02.737: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 21 16:58:02.737: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 21 16:58:02.737: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 21 16:58:02.747: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) May 21 16:58:02.747: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 21 16:58:02.747: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) May 21 16:58:02.747: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 21 16:58:02.747: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) May 21 16:58:02.747: INFO: e2e test version: v1.19.11 May 21 16:58:02.749: INFO: kube-apiserver version: v1.19.11 May 21 16:58:02.749: INFO: >>> kubeConfig: /root/.kube/config May 21 16:58:02.754: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:58:02.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred May 21 16:58:02.789: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 21 16:58:02.797: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 16:58:02.801: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 16:58:02.809: INFO: Waiting for 
terminating namespaces to be deleted... May 21 16:58:02.812: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 16:58:02.820: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container coredns ready: true, restart count 0 May 21 16:58:02.820: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container loopdev ready: true, restart count 0 May 21 16:58:02.820: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:58:02.820: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container kube-multus ready: true, restart count 0 May 21 16:58:02.820: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:58:02.820: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container setsysctls ready: true, restart count 0 May 21 16:58:02.820: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 16:58:02.820: INFO: Container speaker ready: true, restart count 0 May 21 16:58:02.820: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 16:58:02.829: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container coredns ready: true, restart count 0 May 21 16:58:02.829: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container loopdev ready: true, restart count 0 May 21 16:58:02.829: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:58:02.829: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container kube-multus ready: true, restart count 0 May 21 16:58:02.829: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:58:02.829: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container setsysctls ready: true, restart count 0 May 21 16:58:02.829: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 16:58:02.829: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 16:58:02.829: INFO: 
controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container controller ready: true, restart count 0 May 21 16:58:02.829: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container speaker ready: true, restart count 0 May 21 16:58:02.829: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container contour ready: true, restart count 0 May 21 16:58:02.829: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:58:02.829: INFO: Container contour ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1681236ae5c7aecc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:58:03.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6812" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":1,"skipped":503,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:58:03.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 21 16:58:03.909: INFO: Waiting up to 1m0s for all nodes to be ready May 21 16:59:03.951: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. 
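
The "validates that NodeAffinity is respected if not matching" spec above creates a pod whose nodeSelector matches no label on any node and only asserts on the FailedScheduling event quoted in the log (0/3 nodes available, 3 node(s) didn't match node selector). A minimal Go sketch of such a pod, built on the same k8s.io/api types the e2e suite uses; the label key/value and image are illustrative stand-ins, not the values the spec generates:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod stays Pending: its nodeSelector names a label that, by
// assumption, no node in the cluster carries, so the scheduler reports
// "didn't match node selector" for every node.
var restrictedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		// Illustrative key/value; the e2e spec uses its own non-matching selector.
		NodeSelector: map[string]string{"example.com/nonexistent": "true"},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // illustrative image
		}},
	},
}
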
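The PodTopologySpread Preemption spec being set up here patches a small amount of a fake extended resource onto both nodes, fills 9/10 of it with one high-priority and three low-priority pods, then creates a medium-priority pod that carries a topology spread constraint over the kubernetes.io/e2e-pts-preemption key just applied; satisfying both the resource request and the constraint forces the scheduler to preempt low-priority pods, which the spec then verifies. A hedged sketch of the medium pod's scheduling-relevant fields; the priority class name, extended-resource name, request size, and labels are assumptions for illustration, not the test's internal values:

package sketches

import (
	"k8s.io/apimachinery/pkg/api/resource"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Assumed extended-resource name; the e2e spec patches its own onto the nodes.
const fakeRes corev1.ResourceName = "example.com/fake-pts-resource"

// mediumPod sketches the preempting pod: medium priority, a request against
// the fake extended resource, and a MaxSkew=1 spread constraint over the
// dedicated topology key from the setup above. Extended resources must set
// limits equal to requests.
var mediumPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "medium",
		Labels: map[string]string{"e2e-pts-preemption": "medium"}, // illustrative
	},
	Spec: corev1.PodSpec{
		PriorityClassName: "medium-priority", // assumed class name
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-preemption",
			WhenUnsatisfiable: corev1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2",
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{fakeRes: resource.MustParse("4")}, // size is illustrative
				Limits:   corev1.ResourceList{fakeRes: resource.MustParse("4")},
			},
		}},
	},
}
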
STEP: Apply 10 fake resource to node kali-worker. STEP: Apply 10 fake resource to node kali-worker2. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:59:32.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1999" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.355 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":2,"skipped":516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:59:32.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 16:59:32.263: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 16:59:32.270: INFO: 
Waiting for terminating namespaces to be deleted... May 21 16:59:32.274: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 16:59:32.283: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container coredns ready: true, restart count 0 May 21 16:59:32.283: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container loopdev ready: true, restart count 0 May 21 16:59:32.283: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:59:32.283: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container kube-multus ready: true, restart count 0 May 21 16:59:32.283: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:59:32.283: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container setsysctls ready: true, restart count 0 May 21 16:59:32.283: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container speaker ready: true, restart count 0 May 21 16:59:32.283: INFO: high from sched-preemption-1999 started at 2021-05-21 16:59:15 +0000 UTC (1 container statuses recorded) May 21 16:59:32.283: INFO: Container high ready: true, restart count 0 May 21 16:59:32.283: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 16:59:32.293: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container coredns ready: true, restart count 0 May 21 16:59:32.293: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container loopdev ready: true, restart count 0 May 21 16:59:32.293: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:59:32.293: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container kube-multus ready: true, restart count 0 May 21 16:59:32.293: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:59:32.293: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container setsysctls ready: true, restart count 0 May 21 16:59:32.293: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 16:59:32.293: INFO: kubernetes-dashboard-9f9799597-fr9hn from 
kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 16:59:32.293: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container controller ready: true, restart count 0 May 21 16:59:32.293: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container speaker ready: true, restart count 0 May 21 16:59:32.293: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container contour ready: true, restart count 0 May 21 16:59:32.293: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container contour ready: true, restart count 0 May 21 16:59:32.293: INFO: low-1 from sched-preemption-1999 started at 2021-05-21 16:59:18 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container low-1 ready: true, restart count 0 May 21 16:59:32.293: INFO: medium from sched-preemption-1999 started at 2021-05-21 16:59:30 +0000 UTC (1 container statuses recorded) May 21 16:59:32.293: INFO: Container medium ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4839846a-3cce-4013-862e-9660663c81ed=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-8db3cc6b-d4f3-4a14-88cb-77eb413e59ed testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-8db3cc6b-d4f3-4a14-88cb-77eb413e59ed off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-8db3cc6b-d4f3-4a14-88cb-77eb413e59ed STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4839846a-3cce-4013-862e-9660663c81ed=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:59:36.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2158" for this suite. 
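
The taints/tolerations spec above taints kali-worker with kubernetes.io/e2e-taint-key-4839846a-3cce-4013-862e-9660663c81ed=testing-taint-value:NoSchedule, labels it with kubernetes.io/e2e-label-key-8db3cc6b-d4f3-4a14-88cb-77eb413e59ed=testing-label-value, and relaunches the pod with a matching toleration plus a selector on that label, so the tainted node is the only one it can land on. A sketch of those pod fields, using the key/value pairs quoted in the log (the image and the choice of nodeSelector over node affinity are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withTolerations sketches the relaunched pod: it tolerates the random
// NoSchedule taint and selects the random label, both taken from the log
// above, so the only schedulable node is the one the spec just tainted.
var withTolerations = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
	Spec: corev1.PodSpec{
		Tolerations: []corev1.Toleration{{
			Key:      "kubernetes.io/e2e-taint-key-4839846a-3cce-4013-862e-9660663c81ed",
			Operator: corev1.TolerationOpEqual,
			Value:    "testing-taint-value",
			Effect:   corev1.TaintEffectNoSchedule,
		}},
		NodeSelector: map[string]string{
			"kubernetes.io/e2e-label-key-8db3cc6b-d4f3-4a14-88cb-77eb413e59ed": "testing-label-value",
		},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // illustrative image
		}},
	},
}
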
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":3,"skipped":865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:59:36.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 16:59:36.446: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 16:59:36.455: INFO: Waiting for terminating namespaces to be deleted... 
May 21 16:59:36.459: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 16:59:36.467: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:59:36.467: INFO: Container coredns ready: true, restart count 0 May 21 16:59:36.467: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container loopdev ready: true, restart count 0 May 21 16:59:36.468: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:59:36.468: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container kube-multus ready: true, restart count 0 May 21 16:59:36.468: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:59:36.468: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container setsysctls ready: true, restart count 0 May 21 16:59:36.468: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container speaker ready: true, restart count 0 May 21 16:59:36.468: INFO: with-tolerations from sched-pred-2158 started at 2021-05-21 16:59:34 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container with-tolerations ready: true, restart count 0 May 21 16:59:36.468: INFO: high from sched-preemption-1999 started at 2021-05-21 16:59:15 +0000 UTC (1 container statuses recorded) May 21 16:59:36.468: INFO: Container high ready: true, restart count 0 May 21 16:59:36.468: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 16:59:36.476: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container coredns ready: true, restart count 0 May 21 16:59:36.476: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container loopdev ready: true, restart count 0 May 21 16:59:36.476: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container kindnet-cni ready: true, restart count 0 May 21 16:59:36.476: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container kube-multus ready: true, restart count 0 May 21 16:59:36.476: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container kube-proxy ready: true, restart count 0 May 21 16:59:36.476: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container setsysctls ready: true, restart count 0 May 21 16:59:36.476: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 
21 16:59:36.476: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 16:59:36.476: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 16:59:36.476: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container controller ready: true, restart count 0 May 21 16:59:36.476: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container speaker ready: true, restart count 0 May 21 16:59:36.476: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container contour ready: true, restart count 0 May 21 16:59:36.476: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container contour ready: true, restart count 0 May 21 16:59:36.476: INFO: low-1 from sched-preemption-1999 started at 2021-05-21 16:59:18 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container low-1 ready: true, restart count 0 May 21 16:59:36.476: INFO: medium from sched-preemption-1999 started at 2021-05-21 16:59:30 +0000 UTC (1 container statuses recorded) May 21 16:59:36.476: INFO: Container medium ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4ff26a6c-0d42-4952-a6e4-fc982c738df7 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4ff26a6c-0d42-4952-a6e4-fc982c738df7 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4ff26a6c-0d42-4952-a6e4-fc982c738df7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 16:59:40.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3453" for this suite. 
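
The required-NodeAffinity spec above labels kali-worker with kubernetes.io/e2e-4ff26a6c-0d42-4952-a6e4-fc982c738df7=42 and relaunches the pod with a hard node affinity term on that label, so scheduling succeeds only because the label matches. A sketch of that affinity block, with the key and value taken from the log (pod name and image are illustrative):

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withLabels sketches the relaunched pod: a required node affinity term on
// the random label the spec just applied to kali-worker.
var withLabels = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
	Spec: corev1.PodSpec{
		Affinity: &corev1.Affinity{
			NodeAffinity: &corev1.NodeAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/e2e-4ff26a6c-0d42-4952-a6e4-fc982c738df7",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"42"},
						}},
					}},
				},
			},
		},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // illustrative image
		}},
	},
}
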
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":4,"skipped":1557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 16:59:40.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 21 16:59:40.578: INFO: Waiting up to 1m0s for all nodes to be ready May 21 17:00:40.624: INFO: Waiting for terminating namespaces to be deleted... May 21 17:00:40.627: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 21 17:00:40.643: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 21 17:00:40.643: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
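
The scoring spec that starts below first balances CPU/memory requests across the two nodes, then runs a ReplicaSet with 4 replicas on kali-worker and expects a new test pod carrying a soft topology spread constraint to be scored onto kali-worker2, the placement that evens out the distribution. A sketch of what such a test pod looks like under the kubernetes.io/e2e-pts-score key just applied; the pod label used by the selector is an illustrative assumption (it has to match the ReplicaSet's pods), and ScheduleAnyway is what makes the constraint a scoring preference rather than a filter:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ptsScorePod sketches the test pod: the constraint is ScheduleAnyway, so it
// only influences scoring, and the node with fewer matching pods wins.
var ptsScorePod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "test-pod",
		Labels: map[string]string{"spread": "pts-score"}, // illustrative; must match the ReplicaSet's pods
	},
	Spec: corev1.PodSpec{
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-score",
			WhenUnsatisfiable: corev1.ScheduleAnyway,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"spread": "pts-score"},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // illustrative image
		}},
	},
}
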
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364
May 21 17:00:44.734: INFO: ComputeCPUMemFraction for node: kali-worker
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Node: kali-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
May 21 17:00:44.734: INFO: Node: kali-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
May 21 17:00:44.734: INFO: ComputeCPUMemFraction for node: kali-worker2
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400
May 21 17:00:44.734: INFO: Node: kali-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363
May 21 17:00:44.735: INFO: Node: kali-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171
May 21 17:00:44.739: INFO: Waiting for running...
May 21 17:00:49.796: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods. 
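
The fractions in the block above come straight from requested/allocatable on the summary lines: 100m of 88000m CPU is 100/88000 ≈ 0.00114, and 104857600 of 67430219776 bytes is ≈ 0.00156; once the framework's balancing pods push requests past allocatable, the reported fraction is capped at 1, as the next block shows. A small sketch of that arithmetic (the cap-at-1 behaviour is inferred from the logged values, not taken from the framework source):

package sketches

import "fmt"

// cpuMemFraction mirrors the arithmetic in the log: fraction = requested /
// allocatable, capped at 1 once requests exceed what the node can allocate.
func cpuMemFraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func demoFractions() {
	fmt.Println(cpuMemFraction(100, 88000))             // ≈ 0.001136..., as logged for CPU
	fmt.Println(cpuMemFraction(104857600, 67430219776)) // ≈ 0.001555..., as logged for memory
	fmt.Println(cpuMemFraction(351300, 88000))          // 1, matching the post-balance block
}
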
May 21 17:00:54.863: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Node: kali-worker, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:00:54.863: INFO: Node: kali-worker, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 21 17:00:54.863: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Pod for on the node: d32c359c-4d23-4472-8fa6-7461c8a4b829-0, Cpu: 43900, Mem: 33610252288 May 21 17:00:54.863: INFO: Node: kali-worker2, totalRequestedCPUResource: 570800, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:00:54.863: INFO: Node: kali-worker2, totalRequestedMemResource: 437038137344, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "kali-worker" STEP: Verifying if the test-pod lands on node "kali-worker2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node kali-worker STEP: verifying the node 
doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:01:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7007" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:80.402 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":5,"skipped":1619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:01:00.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 21 17:01:00.989: INFO: Waiting up to 1m0s for all nodes to be ready May 21 17:02:01.033: INFO: Waiting for terminating namespaces to be deleted... May 21 17:02:01.037: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 21 17:02:01.052: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 21 17:02:01.052: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
[It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 21 17:02:01.067: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Node: kali-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:02:01.067: INFO: Node: kali-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:02:01.067: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:02:01.067: INFO: Node: kali-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:02:01.067: INFO: Node: kali-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:02:01.077: INFO: Waiting for running... May 21 17:02:06.133: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 21 17:02:11.206: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Node: kali-worker, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:02:11.206: INFO: Node: kali-worker, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 21 17:02:11.206: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Pod for on the node: 86946dc5-9ffe-42be-9292-60eac4996e0b-0, Cpu: 43900, Mem: 33610252288 May 21 17:02:11.206: INFO: Node: kali-worker2, totalRequestedCPUResource: 570800, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:02:11.206: INFO: Node: kali-worker2, totalRequestedMemResource: 437038137344, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
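
The avoidPod spec works by writing the alpha scheduler.alpha.kubernetes.io/preferAvoidPods annotation onto the first node, pointing at the scheduler-priority-avoid-pod ReplicationController, so the scheduler's prefer-avoid-pods scoring steers that controller's pods to the other node, which is what the verification below checks. A hedged sketch of building such an annotation with the core/v1 AvoidPods type; the reason/message strings are illustrative, and the real test also fills in the controller's UID and then updates the Node through the API:

package sketches

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setAvoidPodsAnnotation marks a node so the scheduler prefers not to place
// pods owned by the named ReplicationController on it. The AvoidPods value is
// stored as JSON under the alpha preferAvoidPods annotation; the caller still
// has to update the Node object for the change to take effect.
func setAvoidPodsAnnotation(node *corev1.Node, rcName string) error {
	controller := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       rcName, // e.g. "scheduler-priority-avoid-pod"
					Controller: &controller,
					// The real test also sets the controller's UID here.
				},
			},
			Reason:  "scheduler e2e test",          // illustrative
			Message: "prefer not to schedule here", // illustrative
		}},
	}
	raw, err := json.Marshal(avoid)
	if err != nil {
		return err
	}
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	node.Annotations["scheduler.alpha.kubernetes.io/preferAvoidPods"] = string(raw)
	return nil
}
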
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-5263 to 1 STEP: Verify the pods should not scheduled to the node: kali-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-5263, will wait for the garbage collector to delete the pods May 21 17:02:17.380: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.852027ms May 21 17:02:17.980: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 600.282224ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:02:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5263" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:99.547 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":6,"skipped":2132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:02:40.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 17:02:40.548: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 17:02:40.557: INFO: Waiting for terminating namespaces to be deleted... 
May 21 17:02:40.560: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 17:02:40.569: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container coredns ready: true, restart count 0 May 21 17:02:40.569: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container loopdev ready: true, restart count 0 May 21 17:02:40.569: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:02:40.569: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container kube-multus ready: true, restart count 0 May 21 17:02:40.569: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:02:40.569: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container setsysctls ready: true, restart count 0 May 21 17:02:40.569: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 17:02:40.569: INFO: Container speaker ready: true, restart count 0 May 21 17:02:40.569: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 17:02:40.578: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:02:40.578: INFO: Container coredns ready: true, restart count 0 May 21 17:02:40.578: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:02:40.578: INFO: Container loopdev ready: true, restart count 0 May 21 17:02:40.578: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:02:40.579: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container kube-multus ready: true, restart count 0 May 21 17:02:40.579: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:02:40.579: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container setsysctls ready: true, restart count 0 May 21 17:02:40.579: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 17:02:40.579: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 17:02:40.579: INFO: controller-675995489c-scdfn from metallb-system started at 
2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container controller ready: true, restart count 0 May 21 17:02:40.579: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container speaker ready: true, restart count 0 May 21 17:02:40.579: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container contour ready: true, restart count 0 May 21 17:02:40.579: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:02:40.579: INFO: Container contour ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:02:46.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5943" for this suite. 
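
The MaxSkew=1 filtering spec above creates four equal pods that all carry a hard topology spread constraint over the kubernetes.io/e2e-pts-filter key applied to the two prepared nodes, and then asserts a 2/2 split. A sketch of one such pod; the label, name prefix, and image are illustrative:

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ptsFilterPod sketches one of the four pods. With MaxSkew=1 and
// DoNotSchedule, the per-node counts of matching pods may never differ by
// more than one, so four pods end up 2/2 across the two nodes carrying the
// kubernetes.io/e2e-pts-filter label. (The real spec additionally confines
// the pods to those two nodes via node affinity, omitted here.)
var ptsFilterPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		GenerateName: "pts-filter-",
		Labels:       map[string]string{"spread": "pts-filter"}, // illustrative
	},
	Spec: corev1.PodSpec{
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-filter",
			WhenUnsatisfiable: corev1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"spread": "pts-filter"},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.2", // illustrative image
		}},
	},
}
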
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.171 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":7,"skipped":2840,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:02:46.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 21 17:02:46.720: INFO: Waiting up to 1m0s for all nodes to be ready May 21 17:03:46.762: INFO: Waiting for terminating namespaces to be deleted... May 21 17:03:46.765: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 21 17:03:46.779: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 21 17:03:46.779: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 21 17:03:48.818: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Node: kali-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:03:48.818: INFO: Node: kali-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:03:48.818: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:48.818: INFO: Node: kali-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:03:48.818: INFO: Node: kali-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:03:48.823: INFO: Waiting for running... May 21 17:03:53.881: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
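Editor's note: the ComputeCPUMemFraction lines above are plain ratios of total requested resources to node allocatable. A small sketch reproducing the two fractions printed for kali-worker, using the totals copied from the log; the cap at 1 mirrors what this run prints later (in the tolerate-taints spec) once requests exceed allocatable, and is an assumption about the framework's behaviour:

package main

import "fmt"

func main() {
	// Totals reported for node kali-worker before the balanced pods are added.
	totalRequestedCPUMilli := 100.0       // millicores requested
	cpuAllocatableMilli := 88000.0        // node allocatable CPU, millicores
	totalRequestedMemBytes := 104857600.0 // 100 MiB requested
	memAllocatableBytes := 67430219776.0  // node allocatable memory, bytes

	cpuFraction := totalRequestedCPUMilli / cpuAllocatableMilli
	memFraction := totalRequestedMemBytes / memAllocatableBytes

	// Later in the run the fractions report exactly 1 even though requests
	// far exceed allocatable, so we assume a cap at 1 is applied.
	if cpuFraction > 1 {
		cpuFraction = 1
	}
	if memFraction > 1 {
		memFraction = 1
	}

	fmt.Println(cpuFraction) // ~0.0011363636..., as in the log
	fmt.Println(memFraction) // ~0.0015550535..., as in the log
}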
May 21 17:03:58.949: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.949: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Node: kali-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:03:58.950: INFO: Node: kali-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 21 17:03:58.950: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 21 17:03:58.950: INFO: Node: kali-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:03:58.950: INFO: Node: kali-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:04:10.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-972" for this suite. 
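Editor's note: the spec above pins a labelled pod to one node, then verifies that a second pod carrying a required pod anti-affinity term lands on the other node, using the kubernetes.io/hostname label verified earlier as the topology domain. A minimal sketch of such an anti-affinity term follows; the security=S1 label is only inferred from the pod name pod-with-label-security-s1 in the log, and the second pod's name is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that must NOT land in a topology domain (here: a single node, via
	// kubernetes.io/hostname) that already runs a pod labelled security=S1.
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security", // inferred from the log's pod name
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"}, // illustrative
		Spec: corev1.PodSpec{
			Affinity: antiAffinity,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Println(pod.Spec.Affinity.PodAntiAffinity.
		RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}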
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:84.300 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":8,"skipped":3363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:04:11.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 21 17:04:11.031: INFO: Waiting up to 1m0s for all nodes to be ready May 21 17:05:11.075: INFO: Waiting for terminating namespaces to be deleted... May 21 17:05:11.078: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 21 17:05:11.091: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 21 17:05:11.091: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
[It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 May 21 17:05:11.106: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Node: kali-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:05:11.106: INFO: Node: kali-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:05:11.106: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Pod for on the node: envoy-788lx, Cpu: 200, Mem: 419430400 May 21 17:05:11.106: INFO: Node: kali-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 21 17:05:11.106: INFO: Node: kali-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 21 17:05:11.117: INFO: Waiting for running... May 21 17:05:16.174: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 21 17:05:21.242: INFO: ComputeCPUMemFraction for node: kali-worker May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Node: kali-worker, totalRequestedCPUResource: 351300, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:05:21.242: INFO: Node: kali-worker, totalRequestedMemResource: 268986875904, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 21 17:05:21.242: INFO: ComputeCPUMemFraction for node: kali-worker2 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Pod for on the node: 93c16a79-06b4-4e34-a570-9c21c7033eee-0, Cpu: 43900, Mem: 33610252288 May 21 17:05:21.242: INFO: Node: kali-worker2, totalRequestedCPUResource: 570800, cpuAllocatableMil: 88000, cpuFraction: 1 May 21 17:05:21.242: INFO: Node: kali-worker2, totalRequestedMemResource: 437038137344, memAllocatableVal: 67430219776, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
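Editor's note: every taint applied in the steps below uses the soft PreferNoSchedule effect, with ten "tolerable" taints on the first node and ten "intolerable" ones on the other node; the with-tolerations pod created afterwards tolerates exactly the first node's set, so that node is preferred during scoring rather than the others being filtered out. A minimal sketch of one such taint and a matching toleration; the key and value are copied from the first taint in the log, everything else is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One of the "tolerable" taints placed on the first node. PreferNoSchedule
	// is a soft effect: it only lowers the node's score for pods that do not
	// tolerate it, instead of excluding the node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-b6cd05b4-810a-4f03-b46d-7858cb433a25",
		Value:  "testing-taint-value-a1caf341-8703-4400-99a1-37c89fb8601b",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// A toleration that matches the taint above; the with-tolerations pod
	// would carry one such entry for each of the ten taints on that node.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}

	fmt.Println(toleration.ToleratesTaint(&taint)) // true
}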
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b6cd05b4-810a-4f03-b46d-7858cb433a25=testing-taint-value-a1caf341-8703-4400-99a1-37c89fb8601b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2249a2bc-8ce2-48d3-9015-e250fc3f07fd=testing-taint-value-8fa9d10b-66fb-4611-942e-e6e59a966c23:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e71e84b5-7203-4369-ad04-e1a64daf8fdb=testing-taint-value-dc5aa9f0-c13d-4687-a996-9eefeec6397a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6062d0a8-223e-4fb5-b94f-bff99eda638f=testing-taint-value-d8299211-354e-451e-8898-e87eeb405cdf:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d33cce1f-2bc0-4b6b-96a6-3e791a8368f3=testing-taint-value-cbadecfd-f081-4698-95a2-e0de41814660:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c19e0b6b-c644-484b-88c8-d21e61eb7eab=testing-taint-value-4bfef6e2-4dce-4d4a-81d3-d8bd1679638f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-73b8b8b0-f4e5-47ea-8e45-d09e5beef885=testing-taint-value-c901f9db-6f3d-454a-9a70-c7a975951aaa:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-da40472c-f030-4909-b734-8a0e7045ff7d=testing-taint-value-8288c6b9-b740-4b46-b22e-e635672cdffa:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4c5af9fa-5956-403e-b815-673b242c3a76=testing-taint-value-aeab432f-ad98-484f-9628-33881a8585c1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7a9c0121-1f5c-4a6a-b794-1967fbaa8128=testing-taint-value-1a371c64-d081-49cc-ae98-5a4f0f508935:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e29c43ee-d9de-496a-8c2c-c970cc789c7d=testing-taint-value-0728714a-c06e-4866-aaf5-0c190f672eaf:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-34cc0aa0-3ac2-4e2a-8dad-733e39b460dd=testing-taint-value-1055c9d0-21bd-44ea-9924-5af452fd7068:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-bd979f64-147c-4c75-a237-b89d282df783=testing-taint-value-790590a5-17d0-496e-9136-6af847f9b293:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8b32b295-43da-4d97-b473-11ff2eccf572=testing-taint-value-9b8d5139-c59c-45f1-8b95-c6c69cd7f0f3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e4237bb5-4477-4ea9-a1c6-ba124c0ddc54=testing-taint-value-767921c2-aa08-40f2-baf4-697fd29cfe41:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-332e21ea-bf1f-43d8-ae0c-cebda15ab673=testing-taint-value-7d11e233-6aa6-4888-ae9f-523c65ba600c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-19289bd2-2ef8-40af-8e70-e8a09deb94a2=testing-taint-value-c49704e7-8957-482f-8804-666b2a849635:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-838e70ca-baca-40e6-a077-268dc196d54d=testing-taint-value-ab469d20-fdeb-4170-a02f-43538ad590f7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-94a1f3a3-668e-4305-8081-074a1064118a=testing-taint-value-70842478-fa5d-40e7-939d-7893fc3ad5d5:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-682af982-fcdb-4012-9859-9183a1c5f462=testing-taint-value-3f93aa98-5ec4-4d23-bd9c-01bd6df21fd3:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-682af982-fcdb-4012-9859-9183a1c5f462=testing-taint-value-3f93aa98-5ec4-4d23-bd9c-01bd6df21fd3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-94a1f3a3-668e-4305-8081-074a1064118a=testing-taint-value-70842478-fa5d-40e7-939d-7893fc3ad5d5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-838e70ca-baca-40e6-a077-268dc196d54d=testing-taint-value-ab469d20-fdeb-4170-a02f-43538ad590f7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-19289bd2-2ef8-40af-8e70-e8a09deb94a2=testing-taint-value-c49704e7-8957-482f-8804-666b2a849635:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-332e21ea-bf1f-43d8-ae0c-cebda15ab673=testing-taint-value-7d11e233-6aa6-4888-ae9f-523c65ba600c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e4237bb5-4477-4ea9-a1c6-ba124c0ddc54=testing-taint-value-767921c2-aa08-40f2-baf4-697fd29cfe41:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8b32b295-43da-4d97-b473-11ff2eccf572=testing-taint-value-9b8d5139-c59c-45f1-8b95-c6c69cd7f0f3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-bd979f64-147c-4c75-a237-b89d282df783=testing-taint-value-790590a5-17d0-496e-9136-6af847f9b293:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-34cc0aa0-3ac2-4e2a-8dad-733e39b460dd=testing-taint-value-1055c9d0-21bd-44ea-9924-5af452fd7068:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e29c43ee-d9de-496a-8c2c-c970cc789c7d=testing-taint-value-0728714a-c06e-4866-aaf5-0c190f672eaf:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7a9c0121-1f5c-4a6a-b794-1967fbaa8128=testing-taint-value-1a371c64-d081-49cc-ae98-5a4f0f508935:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4c5af9fa-5956-403e-b815-673b242c3a76=testing-taint-value-aeab432f-ad98-484f-9628-33881a8585c1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-da40472c-f030-4909-b734-8a0e7045ff7d=testing-taint-value-8288c6b9-b740-4b46-b22e-e635672cdffa:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-73b8b8b0-f4e5-47ea-8e45-d09e5beef885=testing-taint-value-c901f9db-6f3d-454a-9a70-c7a975951aaa:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c19e0b6b-c644-484b-88c8-d21e61eb7eab=testing-taint-value-4bfef6e2-4dce-4d4a-81d3-d8bd1679638f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d33cce1f-2bc0-4b6b-96a6-3e791a8368f3=testing-taint-value-cbadecfd-f081-4698-95a2-e0de41814660:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6062d0a8-223e-4fb5-b94f-bff99eda638f=testing-taint-value-d8299211-354e-451e-8898-e87eeb405cdf:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-e71e84b5-7203-4369-ad04-e1a64daf8fdb=testing-taint-value-dc5aa9f0-c13d-4687-a996-9eefeec6397a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2249a2bc-8ce2-48d3-9015-e250fc3f07fd=testing-taint-value-8fa9d10b-66fb-4611-942e-e6e59a966c23:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b6cd05b4-810a-4f03-b46d-7858cb433a25=testing-taint-value-a1caf341-8703-4400-99a1-37c89fb8601b:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:05:31.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-38" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:80.091 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":9,"skipped":3726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:05:31.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 17:05:31.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 17:05:31.140: INFO: Waiting for terminating namespaces to be deleted... 
May 21 17:05:31.143: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 17:05:31.158: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container coredns ready: true, restart count 0 May 21 17:05:31.158: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:31.158: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:31.158: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:31.158: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:31.158: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:31.158: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container speaker ready: true, restart count 0 May 21 17:05:31.158: INFO: with-tolerations from sched-priority-38 started at 2021-05-21 17:05:21 +0000 UTC (1 container statuses recorded) May 21 17:05:31.158: INFO: Container with-tolerations ready: true, restart count 0 May 21 17:05:31.158: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 17:05:31.164: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container coredns ready: true, restart count 0 May 21 17:05:31.164: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:31.164: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:31.164: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:31.164: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:31.164: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:31.164: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 17:05:31.164: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 
15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 17:05:31.164: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container controller ready: true, restart count 0 May 21 17:05:31.164: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container speaker ready: true, restart count 0 May 21 17:05:31.164: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container contour ready: true, restart count 0 May 21 17:05:31.164: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:31.164: INFO: Container contour ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d3c1c35f5c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d4616ab924], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5314/filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d4856d6cde], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.169/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d48e2dddfc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d48f538dde], Reason = [Created], Message = [Created container filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488] STEP: Considering event: Type = [Normal], Name = [filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488.168123d4991cae00], Reason = [Started], Message = [Started container filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488] STEP: Considering event: Type = [Normal], Name = [without-label.168123d3485522e1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5314/without-label to kali-worker2] STEP: Considering event: Type = [Normal], Name = [without-label.168123d36b53cb62], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.168/24]] STEP: Considering event: Type = [Normal], Name = [without-label.168123d3778d2cba], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.168123d3795538d2], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.168123d38168e35a], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.168123d3c08bfb52], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [without-label.168123d3cfc9babb], Reason = [Failed], Message = [Error: failed to get sandbox container task: no running task found: task c63d7335de59a32c0f29803520445d3788b539248b64b6ab54ad0cfe804f2ea2 not found: not found] STEP: Considering event: Type = [Warning], Name = [additional-pod77f290c8-391a-4c40-81f8-b0356c636abd.168123d528816f5e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:05:40.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5314" for this suite. 
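Editor's note: the spec above registers a fake extended resource (example.com/beardsecond) on the nodes and a RuntimeClass whose pod overhead is expressed in that resource, so the second pod fails scheduling because its request plus the RuntimeClass overhead exceeds what the filler pod left available. A minimal sketch, assuming the node.k8s.io/v1beta1 API that a v1.19 cluster like this one serves; the handler name, object name and quantities are illustrative, only the resource name comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass whose fixed per-pod overhead is expressed in the fake
	// extended resource the test registers on the nodes.
	beard := corev1.ResourceName("example.com/beardsecond")
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"}, // hypothetical
		Handler:    "runc",                                   // hypothetical
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{beard: resource.MustParse("250")},
		},
	}

	// A pod that selects the RuntimeClass and also requests the resource
	// directly. For scheduling purposes the overhead is added on top of the
	// container requests, so the node must have request + overhead free.
	rcName := rc.Name
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{beard: resource.MustParse("500")},
				},
			}},
		},
	}

	req := pod.Spec.Containers[0].Resources.Requests[beard]
	ovh := rc.Overhead.PodFixed[beard]
	fmt.Printf("effective request for %s: %d\n", beard, req.Value()+ovh.Value())
}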
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:9.163 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":10,"skipped":4272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:05:40.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 17:05:40.300: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 17:05:40.307: INFO: Waiting for terminating namespaces to be deleted... 
May 21 17:05:40.311: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 17:05:40.318: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container coredns ready: true, restart count 0 May 21 17:05:40.318: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:40.318: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:40.318: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:40.318: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:40.318: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:40.318: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 17:05:40.318: INFO: Container speaker ready: true, restart count 0 May 21 17:05:40.318: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 17:05:40.326: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:40.326: INFO: Container coredns ready: true, restart count 0 May 21 17:05:40.326: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:40.327: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:40.327: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:40.327: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:40.327: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:40.327: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 17:05:40.327: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 17:05:40.327: INFO: controller-675995489c-scdfn from metallb-system started at 
2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container controller ready: true, restart count 0 May 21 17:05:40.327: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container speaker ready: true, restart count 0 May 21 17:05:40.327: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container contour ready: true, restart count 0 May 21 17:05:40.327: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container contour ready: true, restart count 0 May 21 17:05:40.327: INFO: filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488 from sched-pred-5314 started at 2021-05-21 17:05:35 +0000 UTC (1 container statuses recorded) May 21 17:05:40.327: INFO: Container filler-pod-1515a08c-0470-4119-ab2e-4408cac0a488 ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 21 17:05:46.408: INFO: Pod coredns-f9fd979d6-nn288 requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod coredns-f9fd979d6-qkdvz requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod create-loop-devs-26xt8 requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod create-loop-devs-7pddm requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod kindnet-n7f64 requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod kindnet-vlqfv requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod kube-multus-ds-l25rh requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod kube-multus-ds-zr9pd requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod kube-proxy-87457 requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod kube-proxy-ggwmf requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod tune-sysctls-m54ts requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod tune-sysctls-zvk52 requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod dashboard-metrics-scraper-79c5968bdc-mqgxg requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod kubernetes-dashboard-9f9799597-fr9hn requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod controller-675995489c-scdfn requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod speaker-5m2zf requesting local ephemeral resource =0 on Node kali-worker May 21 17:05:46.408: INFO: Pod speaker-kjmdr requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod contour-6648989f79-b9qzx requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.408: INFO: Pod contour-6648989f79-c2th6 requesting local ephemeral resource =0 on Node kali-worker2 May 21 17:05:46.409: INFO: Using pod capacity: 47063248896 May 21 17:05:46.409: INFO: Node: 
kali-worker has local ephemeral resource allocatable: 470632488960 May 21 17:05:46.409: INFO: Node: kali-worker2 has local ephemeral resource allocatable: 470632488960 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 21 17:05:46.485: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.168123d6d452ce34], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-0 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-0.168123d72d70bdad], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.38/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.168123d7464cfdb4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.168123d74a734364], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.168123d760a6efd6], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168123d6d499eeb0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-1 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168123d72d747d6f], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.39/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168123d745d5c569], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168123d749ef7b2e], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168123d760a2a820], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168123d6d68df382], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-10 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168123d72d94f9c9], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.42/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168123d745d75858], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168123d7494e83da], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168123d760562cd3], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168123d6d6d1dbed], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-11 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168123d72da26719], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.175/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168123d7474b55f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168123d74abe4f12], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168123d760a31657], Reason = [Started], Message = [Started container overcommit-11] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.168123d6d70bf538], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-12 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168123d72da8fa3c], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.41/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168123d745740622], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168123d749a6252a], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168123d760777c9c], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168123d6d7296323], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-13 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168123d72dd4f3f0], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.45/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168123d746958a77], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168123d74a7c5e37], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168123d760be1fd0], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168123d6d7551a96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-14 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168123d72da8d317], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.172/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168123d745c156b0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168123d749412d5b], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168123d760834f81], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168123d6d783f51b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-15 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168123d72dc8d8b8], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.176/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168123d746671a45], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168123d74a457016], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168123d7605c4b0e], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168123d6d7bf2f93], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-16 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168123d72fa6d0c1], Reason = [AddedInterface], Message = [Add eth0 
[10.244.2.179/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168123d746a53f8b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168123d74ab8a4b8], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168123d760eca3ee], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168123d6d7e2584d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-17 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168123d72dd05ad3], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.174/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168123d7463acc0e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168123d74a1d0f7f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168123d7612b3af3], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168123d6d818c2b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-18 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168123d72e0cc296], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.170/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168123d74651eb42], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168123d74a12cfa0], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168123d760c9c0a0], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168123d6d84f04bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-19 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168123d72da32856], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.178/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168123d7467b5b74], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168123d74a7a5c8f], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168123d760c9b94f], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168123d6d4ddf0f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-2 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168123d72d6d2d54], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.173/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168123d74685e19e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168123d74aa1572a], Reason = [Created], Message = [Created 
container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168123d760cdb7dd], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168123d6d5253907], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-3 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168123d72d6d83c5], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.37/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168123d7473b75d6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168123d74ac9f3ce], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168123d760c8247c], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168123d6d54fe2cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-4 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168123d72da21172], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.46/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168123d7462fb404], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168123d74a5a1cde], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168123d76068e73d], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168123d6d5835f54], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-5 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168123d72dc99dbd], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.43/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168123d746b0b1b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168123d74ab589ff], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168123d760bed166], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168123d6d5c8d673], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-6 to kali-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168123d72d7349f6], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.44/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168123d7462e9163], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168123d74a8f0ea8], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168123d76040e35e], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.168123d6d5e6638e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-7 to kali-worker] STEP: 
Considering event: Type = [Normal], Name = [overcommit-7.168123d72dd5da97], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.40/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.168123d745cc0338], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.168123d749beaf30], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.168123d76059fded], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168123d6d6204088], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-8 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168123d72dac01b8], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.177/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168123d746074513], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168123d749e34c83], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168123d761001486], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168123d6d65aaec8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5312/overcommit-9 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168123d72d72eaaa], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.171/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168123d74601b9fc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168123d749a68c7f], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168123d760a69531], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.168123d93047ae1a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:05:57.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5312" for this suite. 
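The FailedScheduling warning just above is the expected outcome of this spec: the twenty overcommit-N pods between them request enough local ephemeral storage to fill both workers' allocatable capacity, so the extra additional-pod is rejected with "Insufficient ephemeral-storage" (the third node is excluded by its node-role.kubernetes.io/master taint). The actual requests are sized by the test from the nodes' capacity; as a minimal, purely illustrative sketch of a pod that takes part in this kind of accounting (name and size are hypothetical, only the image matches the events above):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-storage-demo      # hypothetical name, not created by the test
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        ephemeral-storage: "10Gi"   # illustrative size; counted against the node's allocatable ephemeral-storage
      limits:
        ephemeral-storage: "10Gi"
EOF

Once the sum of such requests on every schedulable node reaches its allocatable ephemeral-storage, any further pod with a non-zero request stays Pending with exactly the message logged above.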
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.300 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":11,"skipped":4877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 17:05:57.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 17:05:57.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 17:05:57.618: INFO: Waiting for terminating namespaces to be deleted... 
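Before running the spec body, the framework waits for the nodes to be ready and then records every pod the API server reports on each worker, so that pre-existing resource usage is visible in the log; the per-node listings follow below. Outside the e2e framework, a roughly equivalent snapshot of one node can be taken with a field selector (the node name is just the one from this run):

kubectl get pods --all-namespaces --field-selector spec.nodeName=kali-worker -o wide

This returns the same kube-system, metallb-system and sched-pred-5312 pods that appear in the kali-worker listing that follows.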
May 21 17:05:57.621: INFO: Logging pods the apiserver thinks is on node kali-worker before test May 21 17:05:57.631: INFO: coredns-f9fd979d6-qkdvz from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:57.631: INFO: Container coredns ready: true, restart count 0 May 21 17:05:57.631: INFO: create-loop-devs-7pddm from kube-system started at 2021-05-21 16:42:50 +0000 UTC (1 container statuses recorded) May 21 17:05:57.631: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:57.632: INFO: kindnet-vlqfv from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:57.632: INFO: kube-multus-ds-l25rh from kube-system started at 2021-05-21 16:42:30 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:57.632: INFO: kube-proxy-ggwmf from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:57.632: INFO: tune-sysctls-zvk52 from kube-system started at 2021-05-21 16:42:19 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:57.632: INFO: speaker-5m2zf from metallb-system started at 2021-05-21 16:42:18 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container speaker ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-0 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-0 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-1 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-1 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-10 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-10 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-12 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-12 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-13 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-13 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-3 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-3 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-4 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-4 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-5 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-5 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-6 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-6 ready: true, restart count 0 May 21 17:05:57.632: INFO: overcommit-7 from 
sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.632: INFO: Container overcommit-7 ready: true, restart count 0 May 21 17:05:57.632: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test May 21 17:05:57.643: INFO: coredns-f9fd979d6-nn288 from kube-system started at 2021-05-21 16:56:24 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container coredns ready: true, restart count 0 May 21 17:05:57.643: INFO: create-loop-devs-26xt8 from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container loopdev ready: true, restart count 0 May 21 17:05:57.643: INFO: kindnet-n7f64 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container kindnet-cni ready: true, restart count 0 May 21 17:05:57.643: INFO: kube-multus-ds-zr9pd from kube-system started at 2021-05-21 15:16:02 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container kube-multus ready: true, restart count 0 May 21 17:05:57.643: INFO: kube-proxy-87457 from kube-system started at 2021-05-21 15:13:50 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container kube-proxy ready: true, restart count 0 May 21 17:05:57.643: INFO: tune-sysctls-m54ts from kube-system started at 2021-05-21 15:16:01 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container setsysctls ready: true, restart count 0 May 21 17:05:57.643: INFO: dashboard-metrics-scraper-79c5968bdc-mqgxg from kubernetes-dashboard started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 21 17:05:57.643: INFO: kubernetes-dashboard-9f9799597-fr9hn from kubernetes-dashboard started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 21 17:05:57.643: INFO: controller-675995489c-scdfn from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container controller ready: true, restart count 0 May 21 17:05:57.643: INFO: speaker-kjmdr from metallb-system started at 2021-05-21 15:16:03 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container speaker ready: true, restart count 0 May 21 17:05:57.643: INFO: contour-6648989f79-b9qzx from projectcontour started at 2021-05-21 16:42:15 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container contour ready: true, restart count 0 May 21 17:05:57.643: INFO: contour-6648989f79-c2th6 from projectcontour started at 2021-05-21 15:16:05 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container contour ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-11 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-11 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-14 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-14 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-15 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-15 ready: true, 
restart count 0 May 21 17:05:57.643: INFO: overcommit-16 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-16 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-17 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-17 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-18 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-18 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-19 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-19 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-2 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-2 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-8 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-8 ready: true, restart count 0 May 21 17:05:57.643: INFO: overcommit-9 from sched-pred-5312 started at 2021-05-21 17:05:46 +0000 UTC (1 container statuses recorded) May 21 17:05:57.643: INFO: Container overcommit-9 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-0f04e143-25e9-4dc8-b488-dea3ca2b65f3 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. 
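The taint and label in the steps above are applied through the Go client inside the test, but they correspond to ordinary node operations; an equivalent manual sequence, using the randomly generated key/value pairs from this run, would be roughly:

# Taint the node so only pods with a matching toleration may be scheduled onto it
kubectl taint nodes kali-worker kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24=testing-taint-value:NoSchedule

# Label the node so the test pod can target it with a nodeSelector
kubectl label nodes kali-worker kubernetes.io/e2e-label-key-0f04e143-25e9-4dc8-b488-dea3ca2b65f3=testing-label-value

# Remove the taint again later (the trailing "-" deletes it), as the test does further down
kubectl taint nodes kali-worker kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24=testing-taint-value:NoSchedule-

The relaunched pod ("still-no-tolerations") selects the labelled node but carries no toleration, so the NoSchedule taint is expected to keep it Pending; the FailedScheduling events below confirm that.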
STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d971cfa0be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4195/without-toleration to kali-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d991607f4e], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.47/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d99e79ea28], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d99fa2c98b], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d9a95588e4], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d9ea1676b5], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168123d9ec3526e9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24: testing-taint-value}, that the pod didn't tolerate, 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123da05457b26], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Considering event: Type = [Warning], Name = [without-toleration.168123da0b14cbc8], Reason = [FailedCreatePodSandBox], Message = [Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a661d118e26a4e7d30f9978fd73ac6004f811e22f6256704a6a33ba805ecfc34": Multus: [sched-pred-4195/without-toleration]: error getting pod: pods "without-toleration" not found] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168123d9ec3526e9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24: testing-taint-value}, that the pod didn't tolerate, 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d971cfa0be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4195/without-toleration to kali-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d991607f4e], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.47/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d99e79ea28], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d99fa2c98b], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d9a95588e4], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123d9ea1676b5], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168123da05457b26], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] 
STEP: Considering event: Type = [Warning], Name = [without-toleration.168123da0b14cbc8], Reason = [FailedCreatePodSandBox], Message = [Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a661d118e26a4e7d30f9978fd73ac6004f811e22f6256704a6a33ba805ecfc34": Multus: [sched-pred-4195/without-toleration]: error getting pod: pods "without-toleration" not found] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.168123da8744ed82], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4195/still-no-tolerations to kali-worker] STEP: removing the label kubernetes.io/e2e-label-key-0f04e143-25e9-4dc8-b488-dea3ca2b65f3 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-0f04e143-25e9-4dc8-b488-dea3ca2b65f3 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 17:06:02.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4195" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":12,"skipped":5296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 21 17:06:02.763: INFO: Running AfterSuite actions on all nodes May 21 17:06:02.763: INFO: Running AfterSuite actions on node 1 May 21 17:06:02.763: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0} Ran 12 of 5484 Specs in 480.111 seconds SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped PASS Ginkgo ran 1 suite in 8m1.700492114s Test Suite Passed
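For reference, the scheduler's complaint about still-no-tolerations in the events above (one node had the test taint the pod didn't tolerate, and "2 node(s) didn't match node selector") names the two things a pod must carry to land on a node that is both tainted and labelled this way: a nodeSelector for the label and a toleration for the taint. A hypothetical counterpart that would have been admitted to kali-worker while the taint and label were in place (only the key/value pairs are taken from this run; the rest is illustrative and not part of the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-matching-toleration    # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-label-key-0f04e143-25e9-4dc8-b488-dea3ca2b65f3: testing-label-value
  tolerations:
  - key: kubernetes.io/e2e-taint-key-5016fd6e-084c-41e5-aaed-3ce9f10bcf24
    operator: Equal
    value: testing-taint-value
    effect: NoSchedule
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
EOF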